High Level Session 1: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech

23 Jun 2025 16:00h - 17:30h

Session at a glance

Summary

This high-level panel discussion at the Internet Governance Forum focused on the growing influence of big tech companies and their impact on information ecosystems, democratic processes, and human rights. The session, hosted by Norway, brought together government ministers, civil society representatives, and industry leaders to address the challenges of disinformation and the need for transparency in algorithmic systems.


Participants emphasized that disinformation represents an imminent threat to fundamental freedoms, citing recent examples like the annulled Romanian presidential election due to alleged influence operations. Norwegian Minister Lubna Jaffery highlighted how AI has intensified these challenges, making disinformation production faster and more sophisticated than ever before. The discussion revealed that while digital platforms have democratized information access, they have also created echo chambers and enabled the spread of propaganda and false narratives.


Several speakers stressed the importance of media literacy and education as long-term solutions. Estonia’s Minister Liisa Ly Pakosta shared her country’s experience with full digital transparency and announced Estonia’s pioneering AI-driven education program starting in September. The panel debated different approaches to combating misinformation, with some advocating for stronger regulation while others emphasized the need to protect freedom of expression.


TikTok’s representative Lisa Hayes described the platform’s efforts to combat misinformation through automated detection systems, fact-checking partnerships, and transparency measures. However, Reporters Without Borders’ Thibaut Bruttin argued for more systemic changes, including mandatory prominence for quality journalism and stronger accountability measures for tech companies. The Council of Europe’s Bjorn Berge highlighted new international treaties on AI and human rights as frameworks for addressing these challenges.


The discussion concluded with calls for multi-stakeholder cooperation, emphasizing that no single actor can solve these complex issues alone, and that protecting democratic discourse requires collective effort from governments, tech companies, media, and civil society.


Key points

## Major Discussion Points:


– **Platform Power and Information Ecosystem Control**: The discussion centered on how big tech companies have gained immense influence over political, social, and economic realms, with concerns about algorithmic opacity and whether citizens and nations are “losing the information space” to private corporate interests.


– **Disinformation as a Democratic Threat**: Panelists extensively discussed how disinformation campaigns, particularly those with state backing (specifically mentioning Russian influence operations), pose immediate threats to democratic processes, elections, and social stability, with examples from Romania, Ukraine, and other countries.


– **Transparency and Accountability in Algorithmic Systems**: A significant focus was placed on the need for meaningful transparency in how algorithms work, with debates about what constitutes effective transparency – from technical code access to plain-language explanations that users can actually understand.


– **Multi-stakeholder Collaboration vs. Regulatory Approaches**: The panel explored different approaches to governance, with some advocating for multi-stakeholder cooperation between governments, tech companies, media, and civil society, while others argued for clearer regulatory frameworks and stronger government oversight.


– **Balancing Freedom of Expression with Safety**: Throughout the discussion, panelists grappled with the fundamental challenge of combating misinformation while preserving free speech rights, emphasizing that the fight against disinformation must not suppress legitimate expression.


## Overall Purpose:


The discussion aimed to examine how to maintain human rights and build resilient societies in an era where big tech platforms dominate information flows. The session sought to identify strategies for ensuring transparent, accountable information ecosystems while protecting democratic values and fundamental freedoms.


## Overall Tone:


The tone was serious and urgent throughout, reflecting genuine concern about threats to democratic institutions. While maintaining diplomatic courtesy, there was an underlying tension between different stakeholder perspectives – particularly between the TikTok representative defending industry practices and other panelists calling for stronger regulation. The tone remained constructive and solution-oriented, with panelists emphasizing the need for collaboration despite their different viewpoints. The discussion maintained a sense of cautious optimism that collective action could address these challenges, even while acknowledging the complexity and urgency of the issues at hand.


Speakers

– **Natalia Becker-Aakervik**: Moderator of the session


– **Lubna Jaffery**: Minister of Culture and Equality in Norway


– **Liisa Ly Pakosta**: Minister of Justice and Digital Affairs in Estonia


– **Thibaut Bruttin**: Secretary General for Reporters Without Borders, President of the Forum on Information and Democracy


– **Lisa A. Hayes**: Head of Safety for Public Policy and Senior Counsel for the Americas at TikTok


– **Bjorn Berge**: Deputy Secretary General of the Council of Europe


– **Lucio Adrian Ruiz**: Secretary for the Dicastery of Communication in the Holy See (Monsignor)


– **Session video**: (Appears to be introductory content/video material rather than a speaker)


Additional speakers:


None identified beyond the provided list of speakers.


Full session report

# Comprehensive Report: Big Tech’s Influence on Information Ecosystems and Democratic Governance


## Executive Summary


This high-level panel discussion, hosted by Norway at IGF 2025, examined the growing influence of major technology companies on global information ecosystems and their implications for democratic processes and human rights. The session, moderated by Natalia Becker-Aakervik, brought together government ministers, civil society representatives, industry leaders, and international organisation officials to address urgent challenges posed by disinformation and the need for transparency in algorithmic systems.


The discussion revealed a complex landscape where digital platforms have simultaneously democratised access to information whilst creating echo chambers that fragment society. Participants emphasised that disinformation represents an imminent threat to fundamental freedoms, citing recent examples such as the annulled Romanian presidential election amid allegations of influence operations. The conversation highlighted tensions between protecting freedom of expression and combating harmful misinformation, with speakers exploring approaches ranging from regulatory frameworks to multi-stakeholder collaboration.


Key areas of consensus emerged around the fundamental threat disinformation poses to democracy, the need for meaningful transparency from platforms, and the importance of protecting freedom of expression whilst combating false information. However, significant disagreements surfaced regarding governance models, the effectiveness of media literacy versus systemic solutions, and the appropriate balance between industry self-regulation and public oversight.


## Key Participants and Perspectives


**Government Representatives**: Norwegian Minister of Culture and Equality Lubna Jaffery provided opening remarks and national policy perspectives, while Estonian Minister of Justice and Digital Affairs Liisa Ly Pakosta offered insights from Estonia’s comprehensive digital transparency experience.


**Civil Society**: Thibaut Bruttin, Secretary General for Reporters Without Borders and President of the Forum on Information and Democracy, provided critical perspectives on platform accountability and press freedom.


**Industry**: Lisa A. Hayes, Head of Safety for Public Policy and Senior Counsel for the Americas at TikTok, represented technology sector approaches to content moderation and safety measures.


**International Organisations**: Bjorn Berge, Deputy Secretary General of the Council of Europe, offered insights on international legal frameworks and human rights protections.


**Religious Institution**: Monsignor Lucio Adrian Ruiz, Secretary for the Dicastery of Communication in the Holy See, contributed philosophical and ethical perspectives on technology’s role in human culture.


## Opening Context and Norwegian Strategy


Minister Jaffery opened the discussion by highlighting the timing of the event, noting that Norway had published its national strategy to strengthen resilience against disinformation just the week prior. She framed the discussion around the paradoxical nature of digital transformation, explaining how platforms have democratised expression whilst simultaneously creating echo chambers that fragment society into separate information universes.


Jaffery emphasised that disinformation campaigns specifically target democratic processes, aiming to sway elections, erode solidarity, and create instability. She noted that artificial intelligence has made content production faster and more sophisticated than ever before, with these campaigns directly undermining public trust in electoral systems and governance.


## The Challenge of Platform Power and Information Fragmentation


### Transformation of Information Ecosystems


The discussion examined how digital platforms have fundamentally altered the global information landscape. Jaffery highlighted that whilst digital platforms have enabled broader participation in public discourse, they have simultaneously created conditions where different groups operate with entirely different sets of facts, undermining social cohesion.


Bruttin provided a more critical assessment, arguing that tech platforms have systematically weakened traditional news media economics, contributing to declining public trust and enabling the political weaponisation of social media. He described this as “a massive disruption of public conversation” where lies and propaganda are spread intentionally by actors who benefit from what he characterised as tech company complicity.


### Challenging Technological Neutrality


Monsignor Ruiz offered a fundamental challenge to widely accepted notions of technological neutrality, arguing that “technology is not neutral. All technology is born with an intention.” He suggested that the design and development of digital platforms embed specific purposes and values that shape their impact on society, requiring a shift from reactive approaches to proactive considerations of technology’s inherent purposes.


Ruiz emphasised that digital culture fundamentally changes human relationships, reality perception, and ethical frameworks, creating “a new culture” rather than simply serving as a tool. He referenced his February 2020 paper outlining six reflection points for ethical technology development: transparency, inclusion, accountability, impartiality, reliability, and security and privacy, noting early signatories including Microsoft and IBM.


## Disinformation as an Existential Democratic Threat


### Scope and Real-World Impact


All speakers agreed that disinformation poses a fundamental threat to democratic processes and societal stability. Pakosta provided concrete examples from Estonia’s experience, describing constant hybrid attacks and information wars spreading across multiple platforms, particularly from hostile state actors like Russia. She emphasised that these attacks are not isolated incidents but part of sustained campaigns designed to destabilise democratic societies.


The urgency was underscored by references to recent events, including what Jaffery described as the Romanian presidential election results being “annulled amid allegations” of influence operations. Berge noted that such campaigns can directly undermine democratic processes and destabilise societies, particularly when targeting electoral systems and democratic institutions.


### Personal Impact on Civil Society


Bruttin shared a personal example of disinformation’s impact, describing how a false video claiming he had committed suicide spread on X (formerly Twitter); he thanked TikTok for not hosting such content on its platform. The example illustrated how disinformation campaigns can target individual civil society leaders, and how responses differ across platforms.


## Competing Approaches to Transparency and Accountability


### Estonia’s Model of Full Transparency


Pakosta presented Estonia’s approach as a model for comprehensive transparency, explaining that “Estonia stands for full transparency” with systems allowing citizens to check from their mobile phones who has accessed their data and request justification for such access. She announced that Estonia would be “the first country in the world where we start AI-driven education from 1st of September this year,” pioneering AI-driven education from elementary school to help people become better technology users.
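Estonia’s model can be pictured as an auditable access log that citizens themselves can query. The sketch below is purely illustrative, not Estonia’s actual X-Road implementation: the class names, fields, and the sanction rule are assumptions chosen to mirror the minister’s description (every access is recorded, citizens can see the record, and access without legal grounds is punishable).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One read of a citizen's record by an official or a service."""
    accessor_id: str                  # who looked at the data (hypothetical field)
    dataset: str                      # which register was queried
    timestamp: datetime
    justification: str | None = None  # legal ground, supplied on request

@dataclass
class CitizenAccessLog:
    """Append-only, citizen-visible audit trail (illustrative only)."""
    events: list[AccessEvent] = field(default_factory=list)

    def record(self, accessor_id: str, dataset: str) -> None:
        # Every read is logged unconditionally; transparency is not optional.
        self.events.append(
            AccessEvent(accessor_id, dataset, datetime.now(timezone.utc)))

    def history(self) -> list[AccessEvent]:
        # What the citizen would see in a self-service portal.
        return list(self.events)

    def pending_justification(self) -> list[AccessEvent]:
        # Accesses with no legal ground given; per the minister's account,
        # these can lead to sanctions against the accessor.
        return [e for e in self.events if e.justification is None]
```

The design choice worth noting is that the log is visible to the data subject, not only to auditors: oversight comes from society itself, as Pakosta put it, rather than from government alone.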


Pakosta argued that full transparency is essential for trust, rejecting business justifications for opacity and maintaining that complete transparency is necessary for democratic governance in the digital age. However, she added important caveats about the challenges of international cooperation, noting the need to be wary of authoritarian governments and compromised organisations that may be funded by hostile state actors.


### Industry Perspectives on Meaningful Transparency


Hayes offered a different approach, focusing on “meaningful transparency” requiring plain English explanations of algorithmic systems, regular reporting, and user control tools. She described TikTok’s partnerships with fact-checking organisations covering “more than 60 languages and 100 markets globally” and announced the launch of a new “footnotes” feature in April that enables community-driven context for content.


Hayes presented statistics on TikTok’s automated content moderation, noting that 80% of removed content is identified through automation and 98% is removed proactively before users report it. She argued that transparency must be accessible and understandable to users rather than simply providing technical disclosures that few can comprehend.
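As a rough illustration of how such a pipeline fits together, here is a minimal triage sketch. It is not TikTok’s system: the thresholds, field names, and popularity rule are invented for illustration, loosely following the description above (high-confidence automated removal, plus escalation to human and fact-checker review as videos gain views).

```python
from dataclasses import dataclass

# Illustrative thresholds only; real values are not disclosed.
REMOVE_THRESHOLD = 0.95      # high-confidence automated removal
REVIEW_THRESHOLD = 0.60      # uncertain cases escalate to humans
POPULARITY_RECHECK = 10_000  # extra review as a video gains views

@dataclass
class Video:
    video_id: str
    views: int
    risk_score: float  # output of an upstream misinformation classifier

def triage(video: Video) -> str:
    """Route a video to removal, human review, or distribution."""
    if video.risk_score >= REMOVE_THRESHOLD:
        return "remove"        # the automated path described above
    if video.risk_score >= REVIEW_THRESHOLD or video.views >= POPULARITY_RECHECK:
        return "human_review"  # humans and fact-checkers decide
    return "allow"

# A popular borderline video is escalated rather than auto-removed.
print(triage(Video("v1", views=50_000, risk_score=0.70)))  # -> human_review
```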


### The Public Utility Framework


Bruttin introduced a fundamental reframing of the transparency debate, arguing that “it’s a fantasy to believe that digital space is a private space run by tech companies. We have delegated it to them. It’s something that’s owned by the public to some extent. It’s a public utility, and we need to claim it.”


This perspective challenged basic assumptions about ownership and control of digital spaces, suggesting platforms should be treated as public utilities requiring democratic governance rather than private enterprises with minimal oversight. Bruttin advocated for must-carry provisions requiring platforms to onboard and give prominence to verified news content, moving beyond content removal to actively promoting reliable information.


## The Multi-Stakeholder Governance Debate


### Fundamental Disagreement on Governance Models


One of the most significant disagreements centred on governance approaches, directly challenging foundational principles of internet governance. Bruttin provided a sharp critique of traditional multi-stakeholder approaches, arguing that “governments should not sit on the same side of a table as tech companies. I mean, governments are here to govern and legislators to make laws, and we should not too much mix all that.” He expressed concern about regulatory capture and the potential for tech companies to co-opt governance processes.


In contrast, Hayes advocated for collaborative approaches, arguing that “we are only going to succeed if we work together in whatever format, whether multi-stakeholder or sitting at your round tables, but if we work together to get the best ideas in one place.” Berge similarly emphasised that addressing these challenges requires cooperation across different sectors and stakeholders.


### Defining Distinct Roles


Despite disagreements about governance models, there was consensus that different stakeholders must maintain distinct roles and responsibilities. Even supporters of multi-stakeholder approaches acknowledged that governments, tech companies, civil society, and international organisations should not be treated as equals in governance processes but should maintain their specific functions and accountabilities.


The moderator acknowledged TikTok’s difficult position in participating in these discussions, recognising the challenges platforms face in engaging with governance debates whilst being subject to criticism about their practices.


## Media Literacy Versus Systemic Solutions


### The Education Approach


Pakosta and Jaffery emphasised media literacy as essential for long-term resilience. Pakosta argued that “media literacy requires long-term perspective and teaching children to be intelligent users of information in the digital age,” highlighting Estonia’s pioneering AI education programme. Jaffery similarly emphasised supporting editorial-led media and ensuring media literacy as key responses to disinformation.


Berge highlighted the need for better understanding of generational differences in information consumption, noting surveys showing significant numbers of young people turning to influencers for information. He called for more research on youth behaviour and information consumption patterns.


### The Systemic Change Argument


However, Bruttin strongly disagreed with this emphasis, stating: “I’m afraid I disagree with media literacy as being one of the main solutions. I mean, the choices need to be made up front. We need to have a systemic change.” He argued that focusing on media literacy places the burden on individuals rather than addressing systemic problems created by platform design and business models.


This disagreement highlighted fundamental differences in approach between those advocating for individual empowerment through education and those calling for structural changes to platform governance and design.


## Technology Solutions and Their Limitations


### Automated Systems and Human Oversight


Hayes described TikTok’s approach combining automated detection with human review processes, arguing that AI enables identification of harmful misinformation at scale whilst maintaining human oversight for complex decisions. She emphasised that emerging technologies can help detect and mitigate misinformation when safety and security are prioritised by design.


However, this industry perspective on successful automated moderation created tension with civil society criticism. Bruttin’s assertion that tech companies are complicit in spreading disinformation directly contradicted claims of effective self-regulation, highlighting gaps between industry claims and civil society assessments of platform performance.


### Beyond Content Removal


Several speakers emphasised that effective responses must go beyond simply removing harmful content. Bruttin argued that technology should be used to promote reliable information and reward quality journalism, not just remove bad content. Berge supported this approach, calling for creative methods to elevate trustworthy information rather than focusing solely on content removal.


## Human Rights and Democratic Values


### Balancing Freedom and Safety


Throughout the discussion, speakers grappled with the fundamental challenge of combating misinformation whilst preserving freedom of expression. There was strong consensus that the fight against disinformation must protect rather than suppress freedom of expression, with solutions focusing on promoting truth and transparency rather than censorship.


Jaffery emphasised that “the fight against disinformation must protect rather than suppress freedom of expression,” arguing that responses should focus on ensuring the availability of accurate information rather than prohibiting false information. Berge stressed that human rights aspects must be part of platform design, with proportional responses that respect freedom of expression.


### Addressing Inequality and Social Justice


Jaffery connected information integrity to broader social justice issues, arguing that “a part of strengthening resilience to disinformation campaign is an inclusive and a just society.” She emphasised that technologies must not reproduce or strengthen inequality and discrimination, citing examples of biased facial recognition and algorithmic systems.


This perspective broadened the discussion beyond technical and regulatory approaches to include social and economic dimensions of building resilient information ecosystems.


## International Cooperation and Legal Frameworks


### Binding International Standards


Berge highlighted the development of new international legal frameworks, particularly the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. He argued that international cooperation through binding treaties and enforceable legal standards is necessary to address the global nature of information manipulation.


This approach recognises that disinformation campaigns often cross national boundaries and that effective responses require coordinated international action rather than purely national regulatory approaches.


### Regional Adaptation


Hayes described TikTok’s work with regional safety and youth advisory councils, recognising that content policies and safety measures must be adapted to local contexts whilst maintaining consistent global standards. This highlighted the challenge of balancing universal human rights principles with regional cultural considerations.


## Areas of Strong Consensus


Despite significant disagreements on implementation approaches, the discussion revealed remarkable consensus on fundamental principles:


### Core Threats and Values


All speakers agreed that disinformation poses a fundamental threat to democratic processes and societal stability, requiring urgent and coordinated responses. There was universal agreement that the fight against disinformation must protect rather than suppress freedom of expression, with solutions focusing on promoting truth and transparency rather than censorship.


### Need for Meaningful Transparency


Participants across all sectors agreed that meaningful transparency from tech platforms is crucial for building public trust. This includes not just technical disclosures but understandable explanations, public oversight mechanisms, and user control tools that enable citizens to understand and influence systems affecting their access to information.


## Unresolved Challenges and Future Directions


### Implementation Complexities


The discussion highlighted numerous unresolved issues requiring further attention, including balancing content moderation with freedom of expression, addressing the fundamental economic disruption of traditional media by tech platforms, and developing effective international enforcement mechanisms for digital governance standards.


### Research and Development Needs


Several speakers called for additional research, particularly on youth behaviour and information consumption patterns, the anthropological and ethical implications of digital culture, and effective methods for promoting reliable information to combat disinformation.


### Systemic Versus Individual Solutions


The debate between systemic changes to platform design and business models versus individual-focused solutions like media literacy remains unresolved, with different stakeholders advocating for different approaches based on their assessment of where responsibility should lie.


## Conclusion


This comprehensive discussion revealed both the complexity of challenges facing democratic societies in the digital age and the potential for coordinated responses when stakeholders align on fundamental principles whilst maintaining distinct roles and responsibilities. The strong consensus on core values—protecting democracy, ensuring transparency, and preserving freedom of expression—provides a foundation for future cooperation despite disagreements on specific implementation approaches.


The conversation demonstrated that addressing big tech’s influence on information ecosystems requires moving beyond simple regulatory or technological solutions to consider fundamental questions about power, ownership, and democratic governance in digital spaces. The challenge ahead lies in translating shared principles into effective policies and practices that can protect democratic values whilst enabling the benefits of digital innovation.


The urgency expressed by all participants, combined with their commitment to finding solutions that protect both security and freedom, suggests that continued dialogue and cooperation across sectors will be essential for building resilient information ecosystems that serve democratic societies. As the moderator noted, the conversation would continue throughout the week’s IGF sessions, reflecting the ongoing nature of these critical discussions about the future of democratic discourse in the digital age.


Session transcript

Session video: The influence of global tech companies is growing across the political, social and economic realms. While the actions of big tech shape our information ecosystems, transparency is often lacking, including on how the platform’s priorities and interests shape algorithms. Disinformation represents an imminent threat to fundamental freedoms. It can induce societal polarization, distrust and instability. Fighting disinformation requires measures that ensure media and information freedom, support literacy and enforce transparency and accountability from online platforms. At the same time, the fight against disinformation must protect freedom of expression. How can we ensure a transparent and responsible information ecosystem with an informed public conversation, protection of human rights, free editorial media and resilient citizenries? As big tech assumes an even greater role, are we as citizens and as nations losing the information space?


Natalia Becker-Aakervik: Hello everybody and welcome. And to some of you, welcome back. It’s lovely to see you here again at the high-level session: Losing the Information Space? Ensuring Human Rights and Resilient Societies in the Age of Big Tech, presented by the IGF 2025 proud host country, Norway. I’m Natalia Becker-Aakervik, your moderator. Also a huge welcome, and welcome back, to our online global audience who are watching from all corners of the world. Now, in this session: societies, as we see, enjoy immense benefits from their participation on platforms. But there are also threats, as ethics, safety and negative social impacts may be neglected for leverage in the global AI race. Now, disinformation represents an imminent threat to fundamental freedoms. It can induce polarization in society, distrust and instability, as we have seen. So fighting disinformation really requires measures to ensure media and information freedom and literacy, and transparency and accountability on the part of the online platforms, to really mitigate these risks of potential misuse of platform power. At the same time, the fight against disinformation must protect freedom of expression. And this requires a balancing act, a really delicate balancing act, between security and fundamental freedoms. So as big tech assumes an even greater role in our communication infrastructures, are we, as citizens and as nations, losing the information space? That is a question that has been asked here today, and hopefully our esteemed speakers and panelists will be able to answer it in part. You’re also welcome to continue these conversations, to connect with people, and to try to find answers and ways of working collaboratively to create and maintain that delicate balance we spoke about. How can we ensure a transparent and responsible information ecosystem with an informed public conversation, protection of human rights, free editorial media, as well as resilient citizenries? Those are the questions we hope you’ll keep in mind as we dive into these panel discussions and hear from our speakers. So this high-level leaders’ track session will discuss the ethical and governance challenges posed by platform power. Panelists will unpack how user attention capture leaves open risks for mis- and disinformation campaigns, explore the opacity of algorithmic priorities in the global AI race, as we mentioned, and see if we can debate some strategies to bolster transparency and accountability in this space. So I’m going to introduce our speakers, they’re going to come on stage, and we’re going to get into the conversation. Are you ready? Fantastic, that’s great to hear. So first, we have Ms. Lubna Jaffery, Minister of Culture and Equality in Norway. We have Ms. Liisa Ly Pakosta, Minister of Justice and Digital Affairs in Estonia. We have Mr. Thibaut Bruttin, Secretary General for Reporters Without Borders. We have Ms. Lisa A. Hayes, Head of Safety for Public Policy and Senior Counsel for the Americas at TikTok. We have Mr. Bjorn Berge, Deputy Secretary General of the Council of Europe. And we have Monsignor Lucio Adrian Ruiz, the Secretary for the Dicastery of Communication in the Holy See. Please join me in giving them a warm round of applause as they join us on stage. Thank you.
Yes, let’s welcome them on stage with some more warm applause, ladies and gentlemen. We are very privileged and blessed here to have our representatives proudly from Norway and from all parts of the world who have all flown in to be here with us today and to have these very important conversations that will carry forward the work. So a warm welcome to you, our panelists, and thank you so much for joining us. Now I would like to invite opening remarks by Minister Jaffery, Norwegian Minister of Culture and Equality. Minister, the stage is yours.


Lubna Jaffery: Excellencies, colleagues, experts, distinguished guests. I am delighted to be here on the first of five days of what I am sure will be constructive and intriguing talks about how we can work together to ensure an open, safe and free internet. Some of these talks arise from great challenges. Seven months ago, the results of the first round of the Romanian presidential election were annulled amid allegations of widespread influence operations and social media disinformation. This incident is not isolated. Manipulation on online platforms has become a well-known challenge to countries around the globe. According to reports from the Norwegian Defence Research Establishment, similar tactics have been used in attempts to mislead citizens in countries such as the United States, France, Georgia and South Korea, just to name a few. The advancement of generative AI has only intensified this challenge. Today, disinformation can be produced and spread at an unprecedented scale and sophistication. AI-generated content that mimics real people is now widely accessible. These manipulative efforts are often subtle and can be deeply harmful. Propaganda and false narratives can spread like wildfire across social media, undermining trust in our fellow citizens, institutions and the very fabric of our societies. The goals behind such campaigns are clear: to sway elections, erode solidarity, disrupt public discourse and create instability. The consequences are not abstract. Disinformation can inflict real democratic, physical and economic harm. While it is not new that actors who aim to destabilise societies and manipulate individuals use information as a weapon, information is also our strongest defence. Independent news media offer reliable sources of information, and disinformation thrives when an independent and diverse media landscape is lacking. However, news media’s ability to perform their function as watchdogs and providers of reliable information is challenged by big tech platforms. Journalistic content is an important part of what the platforms offer, but they are unwilling to share data, and the terms for media companies are still not satisfactory. Through transparency, knowledge and free expression, we need to ensure that the truth is easily available to those who look. When this is properly ensured, it is a recipe for success. In other words, the solution is not to prohibit expressions or untruths. The fight against disinformation needs to safeguard, not suppress, freedom of expression. We must also ensure that inequality and discrimination are not transferred to, or even amplified by, the technologies we rely on. To ensure human rights and proper resilience among citizens in the age of big tech, we must safeguard equality and inclusion. As the Norwegian government has stressed in our newly published national strategy to strengthen resilience against disinformation, a robust media system, high levels of trust and high levels of media literacy and source awareness are important tools. Studies have found these factors to be of major significance in terms of how vulnerable a country is to disinformation. The strategy also emphasizes that we will hold big tech companies accountable and demand that they accept that their central role in our information space brings great obligations. Addressing this crisis requires more than national action. It demands global cooperation and strong regulations to hold tech companies accountable. But regulation alone is not enough.
We need a united front: civil society, industry, academia, politicians and decision-makers working together to combat disinformation while safeguarding freedom of expression. Only through collective effort can we build resilient societies that protect human rights, foster innovation and preserve the integrity of our democracies. Thank you very much.


Natalia Becker-Aakervik: Thank you so much, Minister Jaffery, for delivering those opening remarks; we are looking forward to having a meaningful conversation. A multi-stakeholder approach to internet governance is vitally important, as we’ve said before, and the context within which these challenging, difficult conversations take place, where we really are looking for ways forward and for solutions to take the work forward in a good way, is vitally important. So we thank all of you for being here today, and all the representatives bringing the various voices to this engagement. So our first question: digital platforms have transformed how information is accessed and shared. What are some of the broader societal impacts you’ve observed from data-driven and engagement-oriented systems, and how can public interest governance address these dynamics? I’m going to ask that question only one more time because I know you only have three minutes each to answer. So, Ms. Liisa Ly Pakosta, over to you.


Liisa Ly Pakosta: Thank you so much, and it is really an honor to be here, and thank you for these opening words. I fully agree with what you said. And I also think that if we could do everything that you said, that would be great. So the issue we are actually talking about here is how to fulfill these very good perspectives we have as normal human beings, in the era of democracy that we want to protect and the freedom of speech that we want to protect, while new technological opportunities are also available to the bad guys. But not only is the technological variety out there; we can also see that nothing from physical life has really changed. So let us remember that false information before elections was also widely spread in Pompeii, as was found in the archaeological excavations, and that was in the year 79 Anno Domini. So what we are dealing with is nothing new, but we are dealing with it in the era of a huge information revolution. So what has changed? I come from Estonia, which is a fully digitalized country where people have high trust in digital services and digital government, but we are also a neighboring country to Russia. And what we see, what has changed, is that we see constant hybrid attacks. The information wars are spread on many more platforms than ever before. And you very rightly brought up the example of Romania, where the elections were actually attacked by Russia. So this is something that we clearly see, just on new technological levels. The second thing that has changed is education. Normally we have been used to teaching children: they learn to read books and newspapers, and we have some time with the children to discuss these issues. Now we are in a totally different situation, and what we believe in Estonia is that we have to teach children to be good, clever users of the information that is out there. Thank you so much.


Natalia Becker-Aakervik: Thank you so much, Liisa Ly. We want to perhaps continue a little bit on that point in our next question as well, so thank you for your contribution. Then, Mr. Thibaut Bruttin, Secretary General of Reporters Without Borders, what is your response to that question?


Thibaut Bruttin: So, coming from a movement that historically and internationally defends journalism, I think we need to acknowledge the progress made in collecting and spreading information via these new tech platforms that have been built. That’s the obvious thing that we may forget at this point in the history of tech companies, because there is so much dissatisfaction and so much talk about disinformation that maybe we tend to forget that, in order to collect and disseminate news, there have been huge opportunities, and the news media have seized them to some extent. But that being said, we also see the risk, because today the economy of the news media has been deeply weakened by the tech platforms. We have also seen a decline in the trust of the public. We’ve seen political movements weaponizing social media to their benefit. And also we’ve seen tech companies not endorsing their democratic role. So I think the main statement I’d like to make is that there are three things that we need to rethink. First of all, that democracy shall win; I think nobody believes that is a given anymore. We know that we need to protect democracy, that it’s a long-standing, continuous, daily effort. Second, the truth will not prevail if we do not protect our information space. And third, it’s a fantasy to believe that digital space is a private space run by tech companies. We have delegated it to them. It’s something that’s owned by the public to some extent. It’s a public utility, and we need to claim it. We need to restore democratic guarantees in the digital space. That’s what RSF believes in, and what the Forum on Information and Democracy, which is a sister entity of RSF, believes in. And I’m also the president of the Forum. Thank you.


Natalia Becker-Aakervik: Thank you so much, Thibaut. And Mr. Bjorn Berge, what is your response to that in your three-minute answer?


Bjorn Berge: Thank you very much, and first of all, a very good afternoon to all of you. Some of the more concerning impacts are, of course, related to what has already been mentioned by other speakers: the spread of misinformation, disinformation, hate speech and so forth, which aims very deliberately to manipulate public opinion and to destabilize societies, and can certainly also have an impact where it actually undermines democratic processes. And the minister also referred to some of the recent examples we have from elections. Then you really go to the core of a democracy. This type of disinformation campaign, this type of manipulation, we have seen in several elections over the years. So it’s a huge problem. And so the question is then, what are we going to do to combat disinformation? I come from the Council of Europe, where of course we focus very much on standard setting and a legal approach to many of these issues. Last year, as a matter of fact, we issued a guidance note to all our 46 member states, all of Europe. Here we give some very concrete advice on how to go forward, related to fact-checking, platform design and user empowerment. We also have to deal with the more criminal aspects of this through the Cybercrime Convention, which over 70 countries, not only in Europe but globally, have joined, with activities now in over 130 countries. So I think this is also an important part of the work we do in this regard. But more concretely, how to address this? There are a number of issues I can mention; we will maybe come back to them, depending on the time. Of course, it’s related to media literacy. And this is certainly not done overnight; here we need to have a long-term perspective of 5, 10, 15 years. It’s related to fact-checking, as I already said, but there we are always too late; the content has already had an impact, but still it’s crucial. And then it’s also related to how we can suspend or withdraw content. But again, by whom? Yes, social media platforms themselves have a role to play, but we have also seen a trend where this is getting more difficult and less of a priority for some of them. And it’s also about legislation, as I said. Of course, the EU Digital Services Act is very useful, but there is also the issue of enforcement and implementation. So there are all these issues as well. But to start with, we really need to go deeper. We need to know how systems are used or misused, how the algorithms actually behave, and how they are used in this context. I don’t know if I’m close to three minutes.


Natalia Becker-Aakervik: Thank you so much. We’ll come back to that in the next question. Perhaps we’ll have a little time to round off, but now I’d like to… So, thank you for your contribution. Now I’d like to go over to Monsignor Lucio Adrian Ruiz. Monsignor Ruiz, how would you answer that question?


Lucio Adrian Ruiz: Well, I think that we need to understand that digital culture touches many areas of human life, but the deepest ones are in the fields of anthropology and ethics. Because digital culture touches the relationships between persons, the perception of reality and of time, and also the way we find answers to existential questions. It is an approach to reality that is really different in our culture. It means that the ethical questions are also different, because if we see reality in a different way, we act in a different way. It means that everything digital is not just an instrument: it forms and realises a new culture. That is really important, because we are used to thinking that technology is neutral, and it is not, because all technology is born with an intention. Afterwards, the user can apply another intention in using the technology. We are used to saying that technology is neutral, that we can use it for good or for bad, but it is not: it is born with an intention. It means that it is necessary to act well in formation, legislation and research, also in the anthropological field and in the ethical field. Thank you so much.


Natalia Becker-Aakervik: Thank you for that input, Monsignor Ruiz. And Minister Jaffery, would you like to respond to that question?


Lubna Jaffery: Yes, sure. I hope you’re also going to ask TikTok about this, because we are all talking about the big tech platforms and TikTok is present here. So first of all, I think it is important to acknowledge that the digital platforms are, in a sense, part of a democratization, because they allow a lot of us to say what we want to say, to express what we want to express. But the problem is that there are also a lot of people who are silenced and not even heard. That is also the problem. And we know also that these platforms and their algorithms make us live in certain universes. So I’m part of one universe, my neighbor is part of another universe, and we don’t meet, because we are part of echo chambers that amplify the things we want to hear and the things we already mean. So that is also a problem, because the common ground is lost in a sense when you spend a lot of time on the digital platforms. And we also know that digital platforms have had incidents where they have spread hate and radicalization; that is a problem. So I think what we need, and the Norwegian government’s response to this, is that we need to support the media, for instance. This is important, and I know a lot of countries think it is very strange that the Norwegian government supports editorial-led media, but this is one of our main responses against disinformation. Because we need media literacy, we need young people to understand that social media can be positive but can also be negative, and you can’t use social media to get all the facts every time. You need editorial-led media, and that is one of our strongest responses in the strategy we launched last week.


Natalia Becker-Aakervik: Thank you so much, Lubna. And of course, thank you for including TikTok in this conversation, which is very important. It’s not an easy space to be in; we thank you for showing up in good faith and for being part of this conversation, where all the voices need to be present in order to take this forward. So thank you, Lubna. And Lisa, the next question is directed at you first. Emerging technologies are impacting both the spread and efficacy of misinformation. What is their potential for identifying and mitigating misinformation, and how can their adoption be scaled responsibly? Each speaker has four minutes to answer this question. Lisa, please go ahead.


Lisa A. Hayes: Thank you so much for having me here today, and for TikTok being part of this really important conversation. I’m not sure that’s a four-minute question, to be fair. I think it is more like a four-day question, or at a minimum a four-hour question. So I hope that this is just the beginning of the conversation for those of us who are here this week, because I’m only going to be able to scratch the surface, but I’ll give it a shot. There must be something in the name, because I do want to echo something that Liisa, the other Liisa, nodded at in her opening comments. This is not new. Some of the issues we are facing in the digital space now are on new platforms, but every time we’ve had a major technological leap forward, we have had problems of misinformation and disinformation. We started with the printing press and we wound up with tabloids at the supermarkets telling you that a celebrity is having alien babies. We started with radio and we wound up with shock jocks. We started with commercial broadcast television and we wound up with cable stations sort of on the edge. As a society, globally, we have engaged in this digital literacy. We have figured out how to spot truth from lies, and we’ve learned how to digest all of this information. I’m not generally seen as an optimist, but I am a realist. We will come to grips with everything that is happening with the new emerging technologies, and I think that we will, as societies, benefit from them more than we’re harmed by them. I also want to nod at the role that mainstream journalism has brought to these technologies. I know on TikTok, for example, if you want to learn from the Wall Street Journal, or the Washington Post, or the BBC, or the Guardian, all of those news entities have verified accounts, and they put out very entertaining information that is reaching a whole audience and community that they would not otherwise be able to reach. Dare I say it, even the United Nations has a verified TikTok account and is using that account to push out information to a billion people around the world who otherwise might not be looking for that information. So that’s the positive sense of what we are doing. But beyond providing authoritative information, emerging technologies need to prioritize safety by design. We need to prioritize security by design. At TikTok, our policies prohibit harmful misinformation, regardless of the poster’s intent, and we remove accounts that repeatedly post this type of information. To go to your question, we detect this misinformation using automated technology, user reports, proactive intelligence gathering from experts, as well as our fact-checking partners. Technology is used to help us do this work at scale, and to help us do this work rapidly, so that we can catch problems before they begin. Currently, 80% of the content that we remove from our platform is identified through automation and technology. And as videos gain in popularity and start to gain more views, they go through additional rounds of content review to make sure that they comply with our platform guidelines. When we pair technology with human moderation, we have found that we are able to remove violative content proactively 98% of the time, before it’s reported to us as a platform. And we would not be able to do that without some of these new forms of content moderation.
AI is enabling us to identify harmful misinformation, to segregate it as part of a video, and to send it to human review and to fact-checkers for independent assessment. So those are some of the benefits of the things we are able to do with automation right now. And as more creators explore AIGC, AI-generated content, it is imperative that companies continue exploring new ways, new methods, of advancing safety. And I know I’m at my four minutes, because there’s a large clock right in front of me. But we’ll continue this conversation. Thank you.


Natalia Becker-Aakervik: Thank you so much, Lisa, for that input. And we’re looking forward to the responses also from our panel. Mr. Bjorn Berge. Over to you, how would you answer that question in your four minutes?


Bjorn Berge: Yes, in terms of meeting the challenges of disinformation and misinformation. I already started focusing a little on this, but it is such a fundamental issue, for Europe and for the world, and what is important is that we have a clear strategy. Some of the elements in such a strategy are, of course, related to media literacy, as I said, also the issue of fact-checking, and also how we can ensure the suspension or the withdrawal of certain content. And there is also a legislative part to this, with a certain need for regulations in this area. The EU Digital Services Act is, of course, very useful, but here, as I said, there is also the question of how we can help enforce it even better and secure its implementation. And maybe, since developments go so quickly here, it is also time to reflect a little on the lessons learned so far in this area of legislation and regulation. We really need to go into how the systems are actually used and misused, and maybe we also need to understand human behavior better, particularly that of young people. Here we actually need more research. It seems today that young people go elsewhere, and influencers now have a huge impact. There was a survey of young people in the United States, and 40 percent of them said that they go to influencers. I don’t know the situation here in Norway or in other European countries, and that’s why I call for more research and evidence on this, and also a certain focus on youth and young people in this regard. And could we even be more creative? We talk about the negative sides of this, but how can we promote reliable, positive news information to combat lies and disinformation, and how can that be lifted up? I know we have the Secretary General of Reporters Without Borders here; they have taken a very important initiative, the Journalism Trust Initiative, which is a method for giving you and me trustworthy and more reliable information. I think this is a good way to go, and could we also oblige the social media platforms to do more of this, to focus also on reliable, positive news, to actually help people? We also had a big hackathon in Strasbourg, where we had young people from all over Europe and challenged them on this issue of disinformation: what were their views, how would they tackle this? We also had some experts there, and one of the concrete recommendations they made was: do we perhaps also need a new convention on disinformation and foreign influence that can bind all governments and bring them together? That was one of the issues. And the convention on the right to information is also essential today, I think, which is related to what I already said. So there are many other ideas around this, but this is such a fundamental issue, and we need a multi-stakeholder approach to it. It’s not enough that governments sit and discuss this among themselves. We need to have all the users, the academics, the young people, the researchers, the main providers themselves, the main actors themselves.


Natalia Becker-Aakervik: Thank you so much for lifting that up again: the multi-stakeholder approach, of course, a core tenet of this conversation. Thank you, and also for lifting up the work that Reporters Without Borders is doing. We’re going to come back to you in our next question, Thibaut, but to round off this question, how would you respond, Minister Jaffery, in three minutes? Sorry.


Lubna Jaffery: It’s okay. So, from a policy perspective, I think scaling the adoption of emerging technologies responsibly is crucial. First of all, it’s important that emerging technologies do not reproduce or even strengthen inequality and discrimination. We have, unfortunately, seen examples of technologies that maintain biases and discrimination, such as facial recognition systems that cannot detect people of color, or algorithms that reproduce gender biases. So tech companies need to be very careful about the many ways they can ingrain society’s deep-seated prejudices into their technologies. This is also a part of combating disinformation. A part of strengthening resilience to disinformation campaigns is an inclusive and just society. This facilitates trust, stability, and the ability of citizens to take part in an open and informed public discourse. It’s also crucial that technologies do not undermine human rights, such as the freedom of speech, when they aim to mitigate disinformation.


Natalia Becker-Aakervik: Thank you so much for that input, and thank you for answering that question, all our panelists. So, as we move on to the third question, for which we also have four minutes each to answer: timekeeping! However, these are such big questions; we truly appreciate the input that you’re managing to summarize in these pockets of meaningful conversation. And also, Minister Jaffery, thank you so much for joining us. We really appreciate your time, and we understand that you have more obligations to take care of today. So thank you for joining us, and a big round of applause for Minister Jaffery. Thank you. Liisa Ly, the third question, we’ll start with you. There are increasing calls for transparency in algorithmic systems. In your view, what constitutes meaningful transparency? What kinds of transparency practices, technical, operational or communicative, can help to build public trust?


Liisa Ly Pakosta: Thank you. Estonia stands for full transparency. And this is because, as I said before, Estonia is a fully digital state. All our government services are digital, and people give their data to the government for interoperability. This means we need a huge amount of trust from our citizens to make it fully operational. And we have guaranteed this through full transparency. People can check from their mobile phones who has looked at their data. You can go on the Internet and get a full picture of who has taken a look at your data, and you can ask why this person looked at your data. And if he or she didn’t have any legal grounds, that person gets punished. So everything is fully transparent. We even have e-voting, and even the voting system is fully open and transparent. So transparency is absolutely needed for trust, but also for whatever controls might be needed; there is control not only from the government side, but also from society itself. So we definitely stand for full transparency also from the social media companies, because, as you very rightly said, this is a public space, and it has to be fully transparent. This is the only tool with which we can guarantee that there is no room for discrimination or other bad things, and also misinformation. And of course it’s much trickier if we are thinking of machine learning and things like this, but again, we have to find ways to make this, too, fully transparent to everybody. And with all the arguments against it, such as business purposes and so on, we wouldn’t agree. So we have real experience of how full transparency helps trust.


Natalia Becker-Aakervik: Thank you so much for your response. I’m going to go over to Monsignor Ruiz to answer the same question. There are increasing calls for transparency in algorithmic systems: what kinds of transparency practices, technical, operational or communicative, do you think can help to build public trust? Please go ahead.


Lucio Adrian Ruiz: Accessibility to code, sources and system architectures is the first answer, because it means knowing what a system does and how it does it. But we also need to know the planning, the implications, the consequences, the interrelationships, and the vision of the future. This is necessary because these aspects can clarify the why and the how of these technologies. We cannot have a partial view of the present; we need to know the genesis and development in order to evaluate the trustworthiness of these systems. Precisely because a system acts in a systemic, interrelated way, it is necessary to know the other actors that make it up. The presentation of the system must be accompanied by the ability to understand and interpret it, which requires knowledge of the same order as that of the person who produces it, an equality needed to understand the language. But there is another aspect: the possibility of moderation. Moderation must involve the participation of the government on the one hand, and on the other hand all the institutions that represent the collective, the society, and that have the appropriate authority to do it. The development of the systems that shape society and culture, as I said before, cannot be left to autonomous business models and activities. Because the world, society, culture, and also the present, the future, and the design of humanity are a res publica, that is, something belonging to all of society, which needs the active contribution of those who receive the services.


Natalia Becker-Aakervik: Thank you, Monsignor, for your contribution. We truly appreciate that. And with that same question, I would like to hand the floor to you, Mr. Thibaut Bruttin. What would be your response?


Thibaut Bruttin: Well, I think we have to acknowledge where we are, and it’s obvious that today, disinformation is not just a downside of tech companies. In our perspective, there is a massive disruption of the way the public conversation is happening. So there is disinformation, and anybody can put whatever they want under that label; these are words that we need to better define. What we see are lies and propaganda being spread intentionally by actors with the complicity, willing or unwilling, of some of the tech companies. I’m just giving you an example: between Christmas and New Year’s Eve, I was on holidays and I received a video, which looked like a very legitimate legacy media video spreading on TikTok, explaining that I had committed suicide. I had not; I was alive, I’m still alive. But this was spreading on X. And how is it possible that, after claiming that it was false, after giving my identity, providing all the information, this video is still on X? So that’s why I think we need to revise the pact that unites these tech companies with civil society and governments. We owe transparency to the public, but I’m afraid I disagree with media literacy being one of the main solutions. I mean, the choices need to be made up front. We need systemic change in the way we relate to this digital space. And when talking about transparency, we need to be clear about the fact that it’s not about chasing the bad. It’s not about taking down propaganda, which can equate to censorship to some extent. It’s also about promoting the good, about rewarding journalism worthy of that name. Because what is a conversation worth if it’s not based on facts? And media are not perfect. News media are not perfect; I’m never going to say that. But still, they are the closest attainable version of the truth, and we need to reward that through the algorithm. That’s why at RSF we champion a must-carry provision, which means that tech companies should necessarily onboard news content identified as such. But they should also go further. They should give due prominence, an increased visibility, to news content that shows a responsible, analytical practice of journalism. And they need to reward them financially, through neighbouring rights and appropriate moderation. If you go to Ukraine, for example, and you see the amount of content that is taken down and labeled as propaganda or an infringement of users’ rights and terms of use, it’s insane. It’s not the journalism content that is not compliant with terms of use; it’s the war. Yes, when you report about wars, you need to show bodies, and you need to show war scenes, and so on. So we need a media exemption, a news media exemption, that truly reflects what we need in a public conversation. And governments that are strong enough and well-intentioned enough to engage in that direction can only prepare for the future of a restored public conversation.


Natalia Becker-Aakervik: Thank you so much for your response. Lisa, over to you, with three minutes remaining.


Lisa A. Hayes: Do I think transparency is important? Yes. Should I yield my time? First, I am incredibly sorry there was a video created of you, and I hope that we took it down in a more timely manner than X. It was not on TikTok. Oh, excellent. Okay, well, I have no comment about that one. So, yes, as to transparency. I agree with Monsignor, though, that it can’t just be transparency. It needs to be meaningful transparency that people understand. Algorithmic systems are not magic, they’re math. But if you don’t have a sufficient understanding of that math, you’re not going to understand what our disclosures are. You know, my 15-year-old daughter is somewhere in the audience here, and every time she downloads an app, I get a notification asking if she can download it, and I have to go read a 40-page privacy policy. I’m not sure that’s really meaningful transparency. So what we’ve been trying to do at TikTok is figure out ways to communicate to everybody who uses our platform how the algorithm works. We tell people what the signals are that the algorithm relies on, and we tell them in plain English, so that you don’t need to be a mathematician. You can actually understand that if you’re engaging with videos, if you’re liking videos, if you’re sharing videos, if you’re commenting on videos, those are the signals that our system is going to use. And, of course, it’s more complicated, and we give a lot of examples. But that is step one of transparency: to make sure that it is meaningful transparency about how the system works. For us, step two is reporting on that transparency. Were there fake videos posted that should not have been posted? How long did it take for us to get them down? How many of them did we take down? Did we identify them, or did somebody else? All of that information we produce voluntarily on a quarterly basis, and we do it in a machine-readable format, so that researchers trying to study changes quarter over quarter can download that information and compare trends across the different graphs to help better inform public policy. And the third and final thing that I think is so important is that we want to make sure we are giving people tools that allow them to have transparency about their own experience. At TikTok, we give them content controls so that they can manage topics, use keyword filters, say they’re not interested, and refresh their algorithmic feed. These are all ways that people can control their For You feed and their own algorithmic experience on TikTok. And finally, we have touched on media literacy, but to the point of videos that put up false information: at TikTok, we have prompts that discourage people from sharing content that may be unverified. We label AI-generated content (AIGC), and we try to put forward authoritative sources and resource centers, partnering with credible fact-checkers to make sure that people searching for harmful health misinformation get access to accurate and reliable health information. And with that, once again, I have a ‘time is up’ sign.


Natalia Becker-Aakervik: That’s great, Lisa. We know we cut slightly into your time. Is there anything you want to add in a minute or so before we go to the next question, Lisa?


Lisa A. Hayes: For me, these are complex issues. They are tough issues, and the platforms are not a monolith. As we’re designing systems, I don’t know that we’re able to do much without a lot more research and a lot more tools. And so we are here, and we are committed to partnering with everybody in this room, to having those conversations, to getting constructive feedback, to hearing suggestions on how to improve, and to working to implement them. And I’m really excited to continue that conversation.


Natalia Becker-Aakervik: Thank you so much for that. Bjorn, we’ll come back to you, and we’re hopefully going to have time for everybody to give a short closing comment, but we’re going to go to the next question, and then you will also be able to respond. I just wanted to give Lisa the same amount of time that everybody else on the panel had, and to try to be a good and fair timekeeper. So the next question that we’re going to ask you to respond to is: how can governments, tech companies, media and civil society work together to foster public resilience against information manipulation in digital ecosystems? There may be some overlap, but we’re going to ask you to answer the question. Again, it’s three to four minutes. Ms Liisa Ly Pakosta, to you.


Liisa Ly Pakosta: Thank you. You mentioned that governments could have good control, and then you gave an example of how real pictures from Ukraine are not allowed on social media because they include dead bodies. If there is anybody left in the world who still believes that Russia is a democratic country, then we have an issue with misinformation anyway. And we had it beforehand, also before social media. So, there are governments that are very much interested in total control of the information that is out there. And this is also why we have gathered here: to find ways to protect our democracies, to protect our values, to protect free speech, in an era where I do not always agree even with my own husband about what is allowed for our kids and what is not. We sometimes have totally different views on this. So, what can governments and non-governmental organizations do together in a situation where there are some governments that want total, dictatorial control over their citizens? And we have to admit that there is a lot of misuse of non-governmental organizations as well. At least in Estonia we see that some of them are fully financed by an aggressive country, Russia, and they do not operate as a non-governmental organization should. So we have a lot of issues around these questions as well. But if we have democratic governments, real non-governmental organizations, and good innovative companies, then this combination can do wonderful things together. So I fully believe in humankind in a positive way, that we will find a way out. And I really believe, and we have a lot of experience with this in Estonia, that private companies can produce very good innovation that is extremely useful for governments. So let’s work together in this way. But, as said, there are a lot of questions around this, and a lot of things that are not working well.


Natalia Becker-Aakervik: Thank you so much for your input, Ms. Pakosta. Lisa, we’re going to go back to you for this question. If you would like to respond to that, please.


Lisa A. Hayes: Line the Lisas up, going down the road now. I mean, as I already said, at TikTok we clearly believe in the multi-stakeholder process, and our partnerships are just incredibly important to us. We also believe in the importance of, and the need for, experts and fact-checkers. To our partners and collaborators, several of whom I see in the room: I can’t thank you enough. These partnerships with civil society and global institutions are critical to the work that we’re doing, and discussions like these at IGF are critical to informing our policies and our products. Let me just touch briefly on fact-checkers. Through TikTok’s global fact-checking program, we work closely with more than 20 IFCN-accredited fact-checkers. They assess the accuracy of content, and they support informed, responsible moderation decisions. Together, our more than 20 global fact-checking partners cover more than 60 languages and 100 markets globally. I’m incredibly proud that at a time when other companies are pulling back, we are continuing to work with them, continuing to lean into this space. We’ve also started safety and youth councils. These are advisory councils made up of independent experts in different parts of the world who know the regions that they are giving us advice about. Sitting in North America, I don’t know the right solutions for Estonia; I can’t see what’s coming around the corner. And so it’s critically important that we have these regional safety councils with people who do have that expertise and will bring it to us and share it with us, so that we can try to solve problems before they find their way onto the platform. We currently have 10 of these regional councils, including in sub-Saharan Africa, Southeast Asia, Europe, and Brazil, and we are deeply grateful for the advice of those experts. And finally, we’ve talked a little bit about election misinformation and disinformation today, and that is an area that we are really working hard on. We have consulted on and launched several different media literacy campaigns ahead of elections, specific to each market, and we have provided users with trusted information and media literacy resources. For example, in the recent German election, the electoral commission made a video with information about how to vote and key information for voters, which we then promoted inside the TikTok election resource center, directing people in that market who were searching for information about the election to authoritative information. One new feature I will call out before wrapping, because I see my countdown going: we recently launched Footnotes, in April. This is a new product feature that will give our community more context about content on TikTok. It will draw on the collective knowledge of the community by allowing people to add relevant information to content on our platform. Right now we are just testing Footnotes in the United States as an intervention, but it’s important to note that it is intended to complement, not to replace, our existing content moderation and global community guidelines. Our content moderation and our fact-checking work will continue, with Footnotes adding additional context. So, I look forward to continuing that conversation as well.


Natalia Becker-Aakervik: Thank you so much. Thibaut, over to you. How would you answer this question: how can governments, tech companies, media and civil society work together to foster public resilience against information manipulation in digital ecosystems? You have four minutes to answer the question.


Thibaut Bruttin: We need to think ahead and also to build a coalition that goes beyond the sole interest of each actor. I don’t believe so much in multi-stakeholderism. I mean, every time you put people around the table and they represent… Like, governments should not sit on the same side of a table as tech companies. Governments are here to govern, and legislators to make laws, and we should not mix all that too much. The Forum on Information and Democracy that RSF created, for example, is a civil society-led organisation feeding legislators and regulators and governments. And I think we should really organise the conversation. That being said, my message to tech companies is clear: pre-empt legislation. Do good. Provide due prominence to news media worthy of that name. Do take into account, as a signal of authority, the Journalism Trust Initiative that Bogdane mentioned earlier, which is a standard endorsed by about 2,000 media outlets from 130 countries. Governments should not be afraid of regulating and legislating. We need policymakers to understand that their responsibility is to build the framework, not to go into the nitty-gritty details of everything happening in the media field, but really to build the framework that enables media to flourish. Media themselves need to reflect. They need to reinvent themselves. They are facing the Gen-AI moment, and frankly they did not do a great job of facing the digital disruption of the early 2000s and 2010s. So I think it’s important to try and gather around the question of what is the added value of journalism. And finally, I’d like to put the emphasis on the ecosystem in which the news media operate. It’s more than the media and the tech companies and governments and citizens. There is also a multiplicity of actors, and I’m especially thinking of advertisers, who have decided, and it’s their right, to relocate most of their advertising, about 80%, towards tech companies. They have, to some extent, totally fled the legacy media field. And maybe tech companies can provide them with a granularity, a share of people’s brain time, that the news media have never been able to provide. But is it quality? And is it a democratic responsibility to shift totally towards digital? I think they need to reflect on that.


Natalia Becker-Aakervik: Thank you so much, Thibaut, for that answer. Coming over to you, Mr Bjorn Berge, what is your response to that question, in the four minutes that we have?


Bjorn Berge: Thank you very much for raising that question, because the issue of coming together and working together is essential in addressing these types of fundamental challenges. And a lot of what we are discussing really goes to the core of our fundamental rights as citizens and human beings. On coming together, there is some good news I want to mention here. Together with the European Union, with the United States, with Australia and New Zealand, with academic researchers, with civil society, with the industry itself, with the actors themselves, the Council of Europe has now agreed and concluded a new international treaty on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first international treaty of its kind. And why do I say this? Because it brings certain obligations. One of them is directly related to what we are discussing here: there is a specific reference to digital literacy and skills, and an obligation for countries, with the assistance of others, to promote them. I also referred earlier to the guidance note that we have issued to all 46 European countries on this. It also goes into concrete action to support fact-finding organizations, to ensure their independence, that they can operate transparently, and that they have the ability to continue doing what they are doing sustainably over time. It addresses issues such as platform governance, and here it is important that the human rights aspect is part of the design, and that there is compliance with that. It also covers how we can limit activities or accounts, or moderate or remove them. But here we have to be careful, because we have to respect principles of proportionality, and such measures should be a last resort. We are all strong believers in freedom of expression, so this is really important. It is also about a commitment to giving users more empowerment and to building resilience. And all together, it’s about how states, platforms and media come together and assure reliable, quality information.


Natalia Becker-Aakervik: Thank you. You wanted to wrap up with a sentence?


Bjorn Berge: No. This is not like an app you can download; this is a social contract. This is a contract where we come together, and we want to address this issue seriously.


Natalia Becker-Aakervik: Thank you so much. Because we have been really good with time, I want to let the panel know that after we get an answer from Monsignor Ruiz, we will have a couple of minutes for some final remarks. So I ask you to reflect on the parting words that you would like to leave our local and global audience with. And to pose the question to you, Monsignor Ruiz: how can governments, tech companies, media and civil society work together to foster public resilience against information manipulation in digital ecosystems?


Lucio Adrian Ruiz: First of all, I think that governments, technology companies and the media are one part, but all the educational institutions, academics and other institutions that make up society also need to take part. Institutions that in various ways represent and act on behalf of the community need to take part. In other words, everyone must participate and take an interest, not as mere consumers, but as architects of their own lives. How to do it? On the one hand, I think it is necessary to act legislatively, because that is the way to promote this for everybody, and also to accompany and to control, because that is necessary too. But we must also mobilize education to accompany the whole of society in working with the culture. There is a paper from February 2020 that is a call to reflect along three lines: ethics, education and rights. And we invited different states and institutions to sign it, in order to think together about this culture. The first to sign were Microsoft and IBM, and also Italy. And in this paper we set out six points for reflection: transparency, inclusion, accountability, impartiality, reliability, and security and privacy. That means that, all together at the same table, we are thinking about how to really apply this in practice in the algorithms, that is, ethics inside the algorithms. That is a concrete case of how we, as governments and governance, can change reality, or accompany a changing reality.


Natalia Becker-Aakervik: Thank you, we truly appreciate your input and your response to that question. We have some good time left, so I would like to invite our panel to give your final messages and to share what you would like the audience here, and the global audience, to take away from this conversation. Whether it is an insight, an inspiration or a call to action, what would you most like the audience to take away, and what is your final message? We’ll start with you again, Liisa Ly Pakosta.


Liisa Ly Pakosta: Thank you. I will pick up from Monsignor Ruiz the expression that everybody should be the architect of their own life. And the question we are discussing here is really this: some architects are used to building stone houses, and Norway has wonderful old wooden houses, but then information spreads on social media that it’s a good idea to build hay houses. And they won’t stand. Everybody who knows something about architecture knows this, but it becomes a social phenomenon, which is actually bad for many people. This is the issue we are discussing here. So who would be the right one to say that this is not good information, and what would be the administrative measures, or the ethics applied by the companies, to take it down as quickly as possible, so that it would not be harmful for many people, or even for a few? So the question is how we find good solutions that are not dictatorial governments taking down all freedom of speech, and not regulations that take down the information we spread about the war in Ukraine. Or maybe there are some new brilliant ideas; maybe somebody has invented a good way to build hay houses that are actually better than wooden houses or stone houses. So I really believe that we as humankind can globally find good solutions, although we do not know very many good ones at the present moment. But we have to act together, and especially the democratic countries have to act together, to find these solutions.


Natalia Becker-Aakervik: Thank you so much, Liisa Ly. And just so our panellists know, you have four minutes to answer this question. Is there anything you want to add before we go over to Thibaut?


Liisa Ly Pakosta: Yes, thank you for this possibility. So I have one more minute. Trust, trust is the main thing. And I started with education: Estonia is the first country in the world to start AI-driven education, from the 1st of September this year. So we will experiment a lot, but what we really believe is that people have to have the knowledge. It’s not only media literacy; it’s also the literacy of how we live in this technological world, so that we can be the best users of technology, including AI. And I’m absolutely certain that in this way, by being open to education and open to innovation, we as human beings can beat all the bad things coming from the technology side.


Natalia Becker-Aakervik: Thank you so much for those parting words. Thibaut, what would be your takeaway here, the message that you want to leave, as well as a call to action?


Thibaut Bruttin: I think it’s important to understand that we need to put ourselves in solutions mode. And it’s obvious that if you’re an expert in problems, the more complicated the problem is, the happier you are. When you are an expert in solutions, you need to find easy tasks that you can perform in order to succeed. And we are obviously facing a moment in the history of societies where we have to ask ourselves what model of society we want. Do we want to leave the digital space to private interests? Do we want to leave it open to propaganda? Do we want the public conversation to be disrupted, destroyed maybe, and with it the ability to keep civil concord a reality? Obviously, this is a very political choice, in the most interesting sense of the word politics. And that’s a question we have to ask ourselves collectively, because nobody is individually entitled to make that choice. Governments and citizens, not users, must make that decision. And the final question is: do we want a public debate based on facts, or is any fact just another opinion? We face today, globally, an offensive largely triggered from the United States of America that tries to present freedom of expression as being endangered by journalism. This is totally preposterous. You can have both freedom of expression and freedom of the press; that is actually the meaning of the First Amendment to the US Constitution. So let us not be confused by some of the preposterous statements currently being made, which are just an expression of the interests of those who want deregulation. Freedom of expression is not survival of the fittest extended to the public conversation. We need to be very clear about that. I’m totally okay with people voting about it, but I don’t think it is in the interest of the majority to have a total deregulation of the public space that just favors those who can pay and those who speak the loudest. That’s not how we have structured societies in the past. Let us be careful not to take away any rights from anyone, and to preserve both freedom of expression and freedom of the press. There is no contradiction between the two. And people who want to make you choose between one or the other are just people who have an interest in selecting one and not the other. Thank you.


Natalia Becker-Aakervik: Thank you for your contribution there, Thibaut, and for a powerful message as well. So, Lisa, over to you. What are the takeaways that you want our local and global audience to leave with? What would be your call to action, or something you’d like to share?


Lisa A. Hayes: Yeah, I guess I’ll close with a reflection on the topic of this panel. As a reminder, we’re here to talk about ensuring human rights and resilient societies in the age of technology. From an industry perspective, I think that the only way the digital space can be fun, entertaining, and useful for all of us is if we carefully consider the balance: the balance between the rights of the people who are using these technologies, keeping that human-centered focus on people’s rights in design, the safety of the community online, and building resiliency in the technology itself against new threats. And when I say building resiliency against new threats, I mean those threats that have always existed in society, but that emerge differently in the digital space with new technologies and can manifest very quickly. We need to continue to address those three things, rights, safety, and resiliency, in the design of all of these online systems. And frankly, that’s a job for all of us on this stage. It is a job for civil society, for industry, for governments, for other experts. And we are only going to succeed if we work together, in whatever format, whether multi-stakeholder or sitting at your round tables, to get the best ideas in one place, to agree on those ideas, and to drive them forward. At TikTok, we are committed to doing our part to help protect our community. We remain committed to fighting harmful misinformation through strong policies and enforcement, and through our continued work with more than 20 global fact-checking organizations, which, again, cover more than 100 markets and 60 languages globally. And we’re also aiming to empower people by connecting them to authoritative information. We label unverified content, and we partner on media literacy campaigns that help people think critically about the content they’re engaging with online. We have several people from our team here this week, and I hope we can connect with all of you while we are on the ground. If you have not had a chance to drop by our booth, I hope you will do so. We have more information on all of these efforts, and we have a team that would be delighted to connect, to answer your tough questions, and to help build the internet that we all want for tomorrow. That’s why I have my 15-year-old daughter here with me today, to be quite candid. She is the North Star that I bring to all of my work. I want to make sure that TikTok is a place where both my 15-year-old daughter and my 80-year-old father, who have very different levels of digital literacy, can examine and engage with the world around them, and do that well. And I thank everybody here for helping us do that.


Natalia Becker-Aakervik: Lisa, thank you so much for that contribution. We truly appreciate it. And before I go on to you, Bjorn Berge, I just want to mention that Monsignor Ruiz has expressed a wish to impart some messages in Spanish. If he would like to do so, there is a headset on every chair; take it up, find the English channel, 126, and you will be able to follow. So I’m just going to encourage the audience to do that; we have a couple of minutes before we get to Monsignor Ruiz. You can, of course, also speak in English for our panel, but if there are messages you want to add in Spanish, we’ll give you the opportunity to do so. Everybody has a headset; these are the channels, this is the on button, and you find the English channel. Now, for the final remarks from you, Mr. Berge: what is the takeaway that you want to leave the audience with? Is there a call to action? Over to you.


Bjorn Berge: Oh, thank you. Let me start with what the Estonian minister also mentioned here, because it is really essential and very important: there is a cyber war ongoing right now, and we are the subjects, the targets, of massive Russian propaganda, misinformation and fake news, and this needs to be combated in every way we can. This is essential. Secondly, disinformation has become a growing threat to our democracies. We referred to it earlier, but it goes to the very core of a democracy, which is the election, and how elections are being manipulated. And generative AI is making it even faster and cheaper and harder to control. So what we really need is stronger intergovernmental cooperation and clear, enforceable legal standards to confront the threat. Not these types of patchwork fixes or after-the-fact reactions, but robust, future-proof rules. And these must be grounded in human rights and designed for global relevance. At the same time, I believe we must work across borders, sectors and disciplines to counter disinformation and foster public resilience. Because no single actor can fix this alone; it is only together that we can create a more resilient information space and, ultimately, hopefully, a democratic digital space where facts matter, rights are respected, and voices are heard and not manipulated. So maybe these are my final words.


Natalia Becker-Aakervik: Thank you so much for that. Monsignor Ruiz.


Lucio Adrian Ruiz: Well, we are living in a really amazing time, a challenging time for everybody. For this reason, I think that, first, we always need to seek the place of the human person, who must always be at the center. Second, we need to promote the person, every person and the whole person. Third, we need to help this deeply cultural process to be a process for everyone, made by everyone. And finally, we need to involve people so that they are not mere users, but part of the life and the culture. Thank you.


Natalia Becker-Aakervik: Thank you so much for that. Thank you, Monsignor, for your contributions, and thank you to all our panelists for your contributions to this important conversation. We have really come to the end of the discussion here today, with powerful calls to action and messages from our panelists. Thank you also to you, our audience, who have been here, and to those of you watching globally, for your attention. I would now like us to say a very big thank you to our panel with a warm round of applause, please. Thank you so much. And just before our group photo, I would also like to invite all of our audience, locally and globally, back here tomorrow in this main plenary hall. We are going to be having a number of sessions this entire week here in the conference room, presented by the host country, Norway, a very proud host of IGF 2025: conversations like the one we had today, where we take a deep dive into the challenges and the opportunities facing society in these times. We really thank you for your attention, and we look forward to seeing you again. Thank you very much. To our panelists, I’m going to ask you to stay on stage for a group photo; we would really appreciate that. And to the rest of you, thank you for your time. We will see you tomorrow.



Lubna Jaffery

Speech speed

131 words per minute

Speech length

1065 words

Speech time

484 seconds

Digital platforms have democratized expression but created echo chambers that fragment society into separate information universes

Explanation

While digital platforms allow people to express themselves freely, they also create algorithmic echo chambers where individuals exist in separate information universes, losing common ground and preventing meaningful dialogue between different perspectives.


Evidence

Algorithms make us live in certain universes where ‘I’m a part of one universe. My neighbor is part of another universe. And we don’t meet because we are part of echo chambers’


Major discussion point

The Growing Influence and Challenges of Big Tech Platforms


Topics

Human rights | Sociocultural


Disinformation campaigns aim to sway elections, erode solidarity, and create instability, with AI making content production faster and more sophisticated

Explanation

Malicious actors use disinformation strategically to manipulate democratic processes and destabilize societies. The advancement of generative AI has intensified this challenge by enabling unprecedented scale and sophistication in producing and spreading false content.


Evidence

Romanian presidential election results were annulled amid allegations of widespread influence operations and social media disinformation; similar tactics used in US, France, Georgia, South Korea; AI-generated content that mimics real people is now widely accessible


Major discussion point

Disinformation as a Threat to Democracy


Topics

Human rights | Cybersecurity


Agreed with

– Liisa Ly Pakosta
– Thibaut Bruttin
– Bjorn Berge

Agreed on

Disinformation poses a fundamental threat to democratic processes and societal stability


Technologies must not reproduce or strengthen inequality and discrimination, and should facilitate inclusive public discourse

Explanation

Tech companies must be careful not to embed societal biases into their technologies, as this undermines efforts to combat disinformation and build resilient societies. Inclusive and just societies are better equipped to resist disinformation campaigns.


Evidence

Examples of facial recognition systems that cannot detect people of color, or algorithms that reproduce gender biases


Major discussion point

Protecting Human Rights and Democratic Values


Topics

Human rights | Development


The fight against disinformation must protect rather than suppress freedom of expression

Explanation

Efforts to combat disinformation should strengthen rather than weaken freedom of expression. The solution involves ensuring truth is easily available through transparency, knowledge, and free expression rather than prohibiting expressions or untruths.


Evidence

Norwegian government’s strategy emphasizes that ‘the solution is not to prohibit expressions or untruths’ and that fighting disinformation ‘needs to save, not suppress, freedom of expression’


Major discussion point

Protecting Human Rights and Democratic Values


Topics

Human rights


Agreed with

– Thibaut Bruttin
– Bjorn Berge
– Session video

Agreed on

The fight against disinformation must protect rather than suppress freedom of expression


Supporting editorial-led media and ensuring media literacy are key responses to disinformation, helping young people understand social media limitations

Explanation

Government support for independent, editorial-led media is crucial for providing reliable information sources. Media literacy education helps citizens, especially young people, understand that social media cannot provide all facts and that professional journalism remains essential.


Evidence

Norwegian government supports editorial led media as ‘one of our main responses against disinformation’ and emphasizes that ‘you can’t use social media to know all the facts every time. You need editorial led media’


Major discussion point

Media Literacy and Education


Topics

Human rights | Sociocultural


Disagreed with

– Thibaut Bruttin
– Liisa Ly Pakosta

Disagreed on

Role of media literacy in combating disinformation



Liisa Ly Pakosta

Speech speed

139 words per minute

Speech length

1369 words

Speech time

590 seconds

Constant hybrid attacks and information wars are spreading across multiple platforms, particularly from hostile state actors like Russia

Explanation

Estonia, as a neighboring country to Russia, experiences constant hybrid attacks and information warfare campaigns that spread across many more platforms than previously possible. This represents a new technological level of information warfare targeting democratic societies.


Evidence

Romania elections were attacked by Russia; Estonia sees constant hybrid attacks as a neighboring country to Russia


Major discussion point

Disinformation as a Threat to Democracy


Topics

Cybersecurity | Human rights


Agreed with

– Lubna Jaffery
– Thibaut Bruttin
– Bjorn Berge

Agreed on

Disinformation poses a fundamental threat to democratic processes and societal stability


Full transparency is essential for trust, allowing citizens to monitor who accesses their data and ensuring accountability

Explanation

Estonia’s fully digital government operates on complete transparency, where citizens can check from their mobile phones who has accessed their data and can demand accountability if access was unauthorized. This transparency model should extend to media companies operating in the public space.


Evidence

Estonia’s digital government allows people to check who accessed their data, with punishment for unauthorized access; even e-voting system is fully open and transparent


Major discussion point

The Need for Transparency and Accountability


Topics

Human rights | Legal and regulatory


Agreed with

– Lucio Adrian Ruiz
– Lisa A. Hayes
– Session video

Agreed on

Transparency and accountability from tech platforms are essential for public trust


Disagreed with

– Lisa A. Hayes

Disagreed on

Approach to transparency requirements


Media literacy requires long-term perspective and teaching children to be intelligent users of information in the digital age

Explanation

Traditional education models where children learn from books and newspapers with time for discussion are no longer sufficient. Society must teach children to be clever, intelligent users of the vast information available in digital environments.


Evidence

Contrast between traditional education with books/newspapers and current situation where children need to learn to be ‘good users, clever users of the information that is out there’


Major discussion point

Media Literacy and Education


Topics

Sociocultural | Human rights


Disagreed with

– Thibaut Bruttin
– Lubna Jaffery

Disagreed on

Role of media literacy in combating disinformation


Democratic countries, real NGOs, and innovative companies can work together effectively, but must be wary of authoritarian governments and compromised organizations

Explanation

Effective collaboration requires distinguishing between legitimate democratic actors and those with malicious intent. Some governments seek dictatorial control over information, and some NGOs are funded by aggressive countries like Russia and don’t operate as genuine civil society organizations.


Evidence

Estonia sees NGOs ‘fully financed by an aggressive country, Russia, and they do not operate as a non-governmental organization should’; governments that ‘totally want to have a dictatorship control over the citizens’


Major discussion point

Multi-stakeholder Collaboration and Governance


Topics

Human rights | Cybersecurity


Estonia is pioneering AI-driven education starting from elementary school to help people become better technology users

Explanation

Estonia is the first country to implement AI-driven education from the beginning of the school year, focusing on teaching people to be the best users of technology and AI. This educational approach emphasizes knowledge and technological literacy as tools to overcome negative aspects of technology.


Evidence

Estonia is ‘the first country in the world where we start AI-driven education from 1st of September this year’


Major discussion point

Media Literacy and Education


Topics

Sociocultural | Development



Thibaut Bruttin

Speech speed

149 words per minute

Speech length

1246 words

Speech time

501 seconds

Tech platforms have weakened news media economics, declined public trust, and enabled political weaponization of social media

Explanation

While tech platforms have provided opportunities for collecting and disseminating news, they have fundamentally disrupted the media landscape by weakening news media economics, contributing to declining public trust, and allowing political movements to weaponize social media for their benefit.


Evidence

Tech companies have not endorsed their democratic role; there has been a decline in trust of the public; political movements have weaponized social media


Major discussion point

The Growing Influence and Challenges of Big Tech Platforms


Topics

Human rights | Economic


Disinformation represents massive disruption of public conversation, with lies and propaganda spread intentionally by actors with tech company complicity

Explanation

Rather than being a mere side effect, disinformation represents a fundamental disruption of how public conversation happens. Lies and propaganda are intentionally spread by malicious actors, often with the willing or unwilling complicity of tech companies that fail to adequately address the problem.


Evidence

Personal example of a false video claiming the speaker had committed suicide that remained on X platform despite being reported as false with full identification provided


Major discussion point

Disinformation as a Threat to Democracy


Topics

Human rights | Sociocultural


Agreed with

– Lubna Jaffery
– Liisa Ly Pakosta
– Bjorn Berge

Agreed on

Disinformation poses a fundamental threat to democratic processes and societal stability


Democratic guarantees must be restored in digital space, which should be treated as public utility rather than private domain

Explanation

Digital space should not be considered a private domain run by tech companies but rather a public utility owned by the public. Society needs to reclaim this space and restore democratic guarantees within it, rather than leaving it entirely to private interests.


Evidence

Digital space is ‘something that’s owned by the public to some extent. It’s a public utility, and we need to claim it’


Major discussion point

Protecting Human Rights and Democratic Values


Topics

Human rights | Legal and regulatory


Agreed with

– Lubna Jaffery
– Bjorn Berge
– Session video

Agreed on

The fight against disinformation must protect rather than suppress freedom of expression


Technology should be used to promote reliable information and reward quality journalism, not just remove bad content

Explanation

Rather than focusing solely on removing problematic content, platforms should actively promote and reward quality journalism. This includes implementing must-carry provisions for news content and giving due prominence to responsible journalism through algorithmic design.


Evidence

RSF champions ‘must-carry provision’ requiring tech companies to onboard news content and give ‘due prominence, an increased visibility to news content that show responsible, analytical use of journalism’


Major discussion point

Technology Solutions for Content Moderation


Topics

Human rights | Sociocultural


Different stakeholders should maintain distinct roles – governments should govern and legislate rather than sit alongside tech companies as equals

Explanation

Effective governance requires clear separation of roles rather than treating all stakeholders as equals in multi-stakeholder processes. Governments should focus on governing and legislating, while civil society organizations should feed information to legislators and regulators.


Evidence

‘Governments should not sit on the same side of a table as tech companies. Governments are here to govern and legislators to make laws’; Forum on Information and Democracy is ‘a civil society-led organisation feeding legislators and regulators and governments’


Major discussion point

Multi-stakeholder Collaboration and Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Bjorn Berge
– Lucio Adrian Ruiz
– Lisa A. Hayes

Agreed on

Multi-stakeholder collaboration is necessary but requires clear role definition


Disagreed with

– Lisa A. Hayes
– Bjorn Berge

Disagreed on

Multi-stakeholder governance approach vs. clear separation of roles



Bjorn Berge

Speech speed

131 words per minute

Speech length

1696 words

Speech time

773 seconds

Disinformation undermines democratic processes and can destabilize societies, particularly targeting elections

Explanation

Disinformation campaigns deliberately aim to manipulate public opinion and destabilize societies, with particularly concerning impacts on democratic processes like elections. This represents a core threat to democracy that has been observed in several elections over recent years.


Evidence

Recent examples from elections; disinformation ‘really aims very deliberately to manipulate public opinion and to destabilize even societies, and certainly also can have an impact where it actually undermines democratic processes’


Major discussion point

Disinformation as a Threat to Democracy


Topics

Human rights | Cybersecurity


Agreed with

– Lubna Jaffery
– Liisa Ly Pakosta
– Thibaut Bruttin

Agreed on

Disinformation poses a fundamental threat to democratic processes and societal stability


Understanding how systems are used and misused, including algorithmic behavior, is crucial for effective regulation

Explanation

Effective responses to disinformation require deeper understanding of how digital systems and algorithms actually function and how they can be misused. This knowledge is essential for developing appropriate regulatory responses and enforcement mechanisms.


Evidence

Need to ‘know how systems are used or misused and how the algorithms, really, how their behaviour of it and also how they are used in this context’


Major discussion point

The Need for Transparency and Accountability


Topics

Legal and regulatory | Human rights


More research is needed on youth behavior, particularly their reliance on influencers for information

Explanation

Young people are increasingly turning to influencers rather than traditional news sources for information, but there is insufficient research on this trend and its implications. Understanding youth behavior is crucial for developing effective responses to misinformation.


Evidence

Survey in the United States showed ’40 percent of young people said that they go to influencers’; calls for ‘more research and evidence on this and also a certain focus on youth and young people’


Major discussion point

Media Literacy and Education


Topics

Sociocultural | Human rights


International cooperation through treaties like the AI and Human Rights treaty can establish enforceable legal standards

Explanation

The Council of Europe has concluded the first international treaty on Artificial Intelligence and Human Rights, which brings concrete obligations including promoting digital literacy and supporting fact-finding organizations. This represents a model for international cooperation with enforceable standards.


Evidence

New international treaty on ‘Artificial Intelligence and Human Rights, Democracy and Rule of Law. The first international treaty of its kind’ concluded with EU, US, Australia, New Zealand and others


Major discussion point

Multi-stakeholder Collaboration and Governance


Topics

Legal and regulatory | Human rights


Agreed with

– Thibaut Bruttin
– Lucio Adrian Ruiz
– Lisa A. Hayes

Agreed on

Multi-stakeholder collaboration is necessary but requires clear role definition


Human rights aspects must be part of platform design, with proportional responses that respect freedom of expression

Explanation

Platform governance must incorporate human rights considerations into their design and operation. Any content moderation or account limitation measures must be proportional and used as a last resort, maintaining strong respect for freedom of expression principles.


Evidence

Platform governance guidance emphasizes ‘human rights aspect as part of the design’ and that content moderation ‘should be a last resort. We all are strong believers in freedom of expression’


Major discussion point

Protecting Human Rights and Democratic Values


Topics

Human rights | Legal and regulatory


Agreed with

– Lubna Jaffery
– Thibaut Bruttin
– Session video

Agreed on

The fight against disinformation must protect rather than suppress freedom of expression



Lucio Adrian Ruiz

Speech speed

95 words per minute

Speech length

818 words

Speech time

515 seconds

Digital culture fundamentally changes human relationships, reality perception, and ethical frameworks – technology is not neutral but born with intention

Explanation

Digital culture represents a profound anthropological and ethical shift that affects how people relate to each other, perceive reality, and approach existential questions. Technology is not neutral as commonly believed, but is created with specific intentions that shape its impact on society.


Evidence

Digital culture ‘touches the relationship between the persons, also the perception of the reality, the time, and also the way where we find answers to existential questions’; ‘all technology is born with an intention’


Major discussion point

The Growing Influence and Challenges of Big Tech Platforms


Topics

Sociocultural | Human rights


Transparency must include accessibility to code, planning, implications, and the ability for public moderation involving government and civil society

Explanation

True transparency requires not just access to technical code but understanding of planning, implications, and consequences of technological systems. Public moderation must involve both government and civil society institutions with appropriate authority to represent collective interests.


Evidence

Need for ‘accessibility to code, sources, system architectures’ and ‘knowing the planning, the implications, consequences, interrelationships and also the vision of the future’


Major discussion point

The Need for Transparency and Accountability


Topics

Human rights | Legal and regulatory


Agreed with

– Liisa Ly Pakosta
– Lisa A. Hayes
– Session video

Agreed on

Transparency and accountability from tech platforms are essential for public trust


All educational institutions, academics, and community representatives must participate as architects of digital culture, not just consumers

Explanation

Creating responsible digital culture requires active participation from all societal institutions, not just governments, tech companies, and media. Educational institutions, academics, and community representatives must take active roles in shaping rather than merely consuming digital culture.


Evidence

Need for ‘all the educational institutions, academics, other institutions that make up the society’ to participate so people can ‘be architects of their own life’ rather than ‘just consumers’


Major discussion point

Multi-stakeholder Collaboration and Governance


Topics

Sociocultural | Human rights


Agreed with

– Thibaut Bruttin
– Bjorn Berge
– Lisa A. Hayes

Agreed on

Multi-stakeholder collaboration is necessary but requires clear role definition


The human person must always remain at the center, promoting all persons and involving people as participants in digital culture

Explanation

Digital development must maintain human dignity and personhood as the central focus, ensuring that technology serves to promote all people comprehensively. People should be active participants and creators of digital culture rather than passive users or consumers.


Evidence

Call to ‘always seek the place of the human person that must be always in the center’ and ‘promote the person, all person and the whole person’


Major discussion point

Protecting Human Rights and Democratic Values


Topics

Human rights | Sociocultural



Lisa A. Hayes

Speech speed

166 words per minute

Speech length

2365 words

Speech time

849 seconds

AI enables identification of harmful misinformation at scale, with 80% of removed content identified through automation and 98% removed proactively

Explanation

TikTok uses automated technology combined with human moderation to detect and remove harmful misinformation at scale and speed. The majority of content removal happens proactively before users report it, demonstrating the effectiveness of AI-powered content moderation systems.


Evidence

‘Currently, 80% of the content that we remove from our platform is identified through automation and technology’ and ‘we are able to remove violative content 98% of the time, proactively, before it’s reported to us’


Major discussion point

Technology Solutions for Content Moderation


Topics

Cybersecurity | Human rights


Emerging technologies can help detect and mitigate misinformation but must prioritize safety and security by design

Explanation

New technologies offer significant potential for identifying and addressing misinformation, but companies must build safety and security considerations into their systems from the ground up. This includes developing new methods for advancing safety as AI-generated content becomes more prevalent.


Evidence

‘AI is enabling us to identify harmful misinformation, to segregate it as part of a video, and to send it to human review, to send it to fact-checkers for independent assessment’


Major discussion point

Technology Solutions for Content Moderation


Topics

Cybersecurity | Human rights


Meaningful transparency requires plain English explanations of algorithmic systems, regular reporting, and user control tools

Explanation

True transparency goes beyond technical disclosures to include understandable explanations of how algorithms work, regular public reporting on content moderation efforts, and tools that give users control over their algorithmic experience.


Evidence

TikTok tells people ‘what the signals are that the algorithm relies on’ in ‘plain English’; provides quarterly reporting ‘in a machine-readable format’; offers ‘content controls so that they can manage topics, they can use keyword filters’


Major discussion point

The Need for Transparency and Accountability


Topics

Human rights | Legal and regulatory


Agreed with

– Liisa Ly Pakosta
– Lucio Adrian Ruiz
– Session video

Agreed on

Transparency and accountability from tech platforms are essential for public trust


Disagreed with

– Liisa Ly Pakosta

Disagreed on

Approach to transparency requirements


Partnerships with fact-checkers, safety councils, and civil society are critical, with TikTok working with 20+ fact-checking partners across 100 markets

Explanation

Effective content moderation requires collaboration with independent experts, fact-checkers, and civil society organizations who understand regional contexts and can provide specialized expertise. These partnerships are essential for addressing misinformation at global scale.


Evidence

TikTok works with ‘more than 20 IFCN accredited fact-checkers’ covering ‘more than 60 languages and 100 markets globally’; has ’10 regional safety councils’ including experts from different regions


Major discussion point

Multi-stakeholder Collaboration and Governance


Topics

Human rights | Sociocultural


Agreed with

– Thibaut Bruttin
– Bjorn Berge
– Lucio Adrian Ruiz

Agreed on

Multi-stakeholder collaboration is necessary but requires clear role definition


N

Natalia Becker-Aakervik

Speech speed

173 words per minute

Speech length

2554 words

Speech time

884 seconds

Digital platforms bring benefits but also risks when ethics and safety are neglected in the global AI race

Explanation

While societies enjoy immense benefits from participating on digital platforms, there are significant threats when ethical considerations, safety measures, and negative social impacts are overlooked in the competitive rush to advance AI capabilities globally.


Evidence

Societies ‘enjoy immense benefits from their participation on platforms. But there are also threats as ethics, safety, and negative social impacts may be neglected for leverage in the global AI space or the global AI race’


Major discussion point

The Growing Influence and Challenges of Big Tech Platforms


Topics

Human rights | Legal and regulatory


S

Session video

Speech speed

134 words per minute

Speech length

510 words

Speech time

227 seconds

Global tech companies’ growing influence across political, social and economic realms lacks transparency, particularly regarding algorithmic priorities

Explanation

The influence of big tech companies is expanding across multiple domains of society, but there is insufficient transparency about how these platforms operate. This lack of transparency is especially concerning regarding how platform priorities and interests shape the algorithms that determine what information people see.


Evidence

While the actions of big tech shape our information ecosystems, transparency is often lacking, including on how the platform’s priorities and interests shape algorithms


Major discussion point

The Growing Influence and Challenges of Big Tech Platforms


Topics

Human rights | Legal and regulatory


Agreed with

– Liisa Ly Pakosta
– Lucio Adrian Ruiz
– Lisa A. Hayes

Agreed on

Transparency and accountability from tech platforms are essential for public trust


Fighting disinformation requires a balanced approach that protects freedom of expression while ensuring transparency and accountability

Explanation

Combating disinformation is essential for protecting fundamental freedoms and preventing societal polarization and instability. However, this fight must be conducted in a way that preserves freedom of expression through measures that support media literacy, information freedom, and platform accountability.


Evidence

Fighting disinformation requires measures that ensure media and information freedom, support literacy and enforce transparency and accountability from online platforms. At the same time, the fight against disinformation must protect freedom of expression


Major discussion point

Protecting Human Rights and Democratic Values


Topics

Human rights | Legal and regulatory


Agreed with

– Lubna Jaffery
– Thibaut Bruttin
– Bjorn Berge

Agreed on

The fight against disinformation must protect rather than suppress freedom of expression


There is a critical question of whether citizens and nations are losing control of the information space as big tech assumes greater roles

Explanation

As technology companies take on increasingly central roles in communication infrastructure and information distribution, there are growing concerns about whether democratic societies are losing sovereignty over their information environments. This raises fundamental questions about who controls the spaces where public discourse occurs.


Evidence

As big tech assumes an even greater role, are we as citizens and as nations losing the information space?


Major discussion point

The Growing Influence and Challenges of Big Tech Platforms


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

Disinformation poses a fundamental threat to democratic processes and societal stability

Speakers

– Lubna Jaffery
– Liisa Ly Pakosta
– Thibaut Bruttin
– Bjorn Berge

Arguments

Disinformation campaigns aim to sway elections, erode solidarity, and create instability, with AI making content production faster and more sophisticated


Constant hybrid attacks and information wars are spreading across multiple platforms, particularly from hostile state actors like Russia


Disinformation represents massive disruption of public conversation, with lies and propaganda spread intentionally by actors with tech company complicity


Disinformation undermines democratic processes and can destabilize societies, particularly targeting elections


Summary

All speakers agree that disinformation is not merely a side effect of digital platforms but a deliberate weapon used to manipulate elections, destabilize societies, and undermine democratic institutions. They recognize this as an urgent threat requiring coordinated response.


Topics

Human rights | Cybersecurity


The fight against disinformation must protect rather than suppress freedom of expression

Speakers

– Lubna Jaffery
– Thibaut Bruttin
– Bjorn Berge
– Session video

Arguments

The fight against disinformation must protect rather than suppress freedom of expression


Democratic guarantees must be restored in digital space, which should be treated as public utility rather than private domain


Human rights aspects must be part of platform design, with proportional responses that respect freedom of expression


Fighting disinformation requires a balanced approach that protects freedom of expression while ensuring transparency and accountability


Summary

There is strong consensus that combating disinformation should strengthen rather than weaken freedom of expression. Solutions should focus on promoting truth and transparency rather than censorship or suppression of speech.


Topics

Human rights | Legal and regulatory


Transparency and accountability from tech platforms are essential for public trust

Speakers

– Liisa Ly Pakosta
– Lucio Adrian Ruiz
– Lisa A. Hayes
– Session video

Arguments

Full transparency is essential for trust, allowing citizens to monitor who accesses their data and ensuring accountability


Transparency must include accessibility to code, planning, implications, and the ability for public moderation involving government and civil society


Meaningful transparency requires plain English explanations of algorithmic systems, regular reporting, and user control tools


Global tech companies’ growing influence across political, social and economic realms lacks transparency, particularly regarding algorithmic priorities


Summary

All speakers agree that meaningful transparency from tech platforms is crucial for building public trust. This includes not just technical disclosures but understandable explanations, public oversight mechanisms, and user control tools.


Topics

Human rights | Legal and regulatory


Multi-stakeholder collaboration is necessary but requires clear role definition

Speakers

– Thibaut Bruttin
– Bjorn Berge
– Lucio Adrian Ruiz
– Lisa A. Hayes

Arguments

Different stakeholders should maintain distinct roles – governments should govern and legislate rather than sit alongside tech companies as equals


International cooperation through treaties like the AI and Human Rights treaty can establish enforceable legal standards


All educational institutions, academics, and community representatives must participate as architects of digital culture, not just consumers


Partnerships with fact-checkers, safety councils, and civil society are critical, with TikTok working with 20+ fact-checking partners across 100 markets


Summary

While all speakers support multi-stakeholder approaches, they agree that different actors must maintain distinct roles and responsibilities rather than treating all stakeholders as equals in governance processes.


Topics

Human rights | Legal and regulatory


Similar viewpoints

Both ministers emphasize the critical importance of media literacy education and supporting professional journalism as fundamental responses to disinformation, particularly for young people who need to understand the limitations of social media as an information source.

Speakers

– Lubna Jaffery
– Liisa Ly Pakosta

Arguments

Supporting editorial-led media and ensuring media literacy are key responses to disinformation, helping young people understand social media limitations


Media literacy requires long-term perspective and teaching children to be intelligent users of information in the digital age


Topics

Human rights | Sociocultural


Both speakers advocate for proactive approaches that promote quality information and journalism rather than focusing solely on content removal, while recognizing the need for better understanding of how information consumption patterns are changing.

Speakers

– Thibaut Bruttin
– Bjorn Berge

Arguments

Technology should be used to promote reliable information and reward quality journalism, not just remove bad content


More research is needed on youth behavior, particularly their reliance on influencers for information


Topics

Human rights | Sociocultural


Both speakers recognize that digital technology represents a fundamental cultural shift requiring new approaches to education and human development, emphasizing the need for people to actively shape rather than passively consume digital culture.

Speakers

– Lucio Adrian Ruiz
– Liisa Ly Pakosta

Arguments

Digital culture fundamentally changes human relationships, reality perception, and ethical frameworks – technology is not neutral but born with intention


Estonia is pioneering AI-driven education starting from elementary school to help people become better technology users


Topics

Sociocultural | Human rights


Unexpected consensus

Tech platform representative acknowledging the need for stronger regulation and oversight

Speakers

– Lisa A. Hayes
– Thibaut Bruttin
– Bjorn Berge

Arguments

Meaningful transparency requires plain English explanations of algorithmic systems, regular reporting, and user control tools


Democratic guarantees must be restored in digital space, which should be treated as public utility rather than private domain


Understanding how systems are used and misused, including algorithmic behavior, is crucial for effective regulation


Explanation

It is unexpected that the TikTok representative largely aligned with civil society and regulatory perspectives on the need for transparency, accountability, and treating digital space as a public utility rather than defending purely private interests.


Topics

Human rights | Legal and regulatory


Agreement on the limitations of purely technological solutions to disinformation

Speakers

– Lisa A. Hayes
– Thibaut Bruttin
– Bjorn Berge

Arguments

AI enables identification of harmful misinformation at scale, with 80% of removed content identified through automation and 98% removed proactively


Technology should be used to promote reliable information and reward quality journalism, not just remove bad content


Human rights aspects must be part of platform design, with proportional responses that respect freedom of expression


Explanation

Despite representing different perspectives, there was unexpected consensus that technology alone cannot solve disinformation problems and that human oversight, quality journalism promotion, and rights-based approaches are essential complements to automated systems.


Topics

Human rights | Cybersecurity


Overall assessment

Summary

The speakers demonstrated remarkable consensus on core principles: disinformation threatens democracy, freedom of expression must be protected while fighting misinformation, transparency and accountability are essential, and multi-stakeholder collaboration is necessary with clear role definitions. There was also agreement on the importance of media literacy, supporting quality journalism, and treating digital space as having public utility characteristics.


Consensus level

High level of consensus on fundamental principles and approaches, with differences mainly in implementation details rather than core values. This strong agreement across diverse stakeholders (government ministers, civil society, tech industry, international organizations, religious institutions) suggests a solid foundation for coordinated action on digital governance and disinformation challenges. The consensus indicates potential for effective policy development and implementation if stakeholders can maintain their distinct roles while working toward shared objectives.


Differences

Different viewpoints

Multi-stakeholder governance approach vs. clear separation of roles

Speakers

– Thibaut Bruttin
– Lisa A. Hayes
– Bjorn Berge

Arguments

Different stakeholders should maintain distinct roles – governments should govern and legislate rather than sit alongside tech companies as equals


We will only succeed if we work together, in whatever format – multi-stakeholder or around round tables – to get the best ideas in one place


A multi-stakeholder approach is needed – it is not enough that governments sit and discuss this among themselves


Summary

Thibaut Bruttin argues against traditional multi-stakeholder approaches, believing governments should not sit as equals with tech companies but maintain distinct governing roles. Lisa Hayes and Bjorn Berge advocate for collaborative multi-stakeholder approaches where all parties work together as partners.


Topics

Legal and regulatory | Human rights


Role of media literacy in combating disinformation

Speakers

– Thibaut Bruttin
– Liisa Ly Pakosta
– Lubna Jaffery

Arguments

Media literacy is not one of the main solutions; the choices need to be made up front through systemic change


Media literacy requires long-term perspective and teaching children to be intelligent users of information in the digital age


Supporting editorial-led media and ensuring media literacy are key responses to disinformation, helping young people understand social media limitations


Summary

Thibaut Bruttin dismisses media literacy as a main solution, arguing for systemic changes instead. Liisa Ly Pakosta and Lubna Jaffery view media literacy as essential, emphasizing long-term education and helping people understand information sources.


Topics

Sociocultural | Human rights


Approach to transparency requirements

Speakers

– Liisa Ly Pakosta
– Lisa A. Hayes

Arguments

Full transparency is essential for trust, allowing citizens to monitor who accesses their data and ensuring accountability


Meaningful transparency requires plain English explanations of algorithmic systems, regular reporting, and user control tools


Summary

Liisa Ly Pakosta advocates for complete transparency in all systems, rejecting business justifications for opacity. Lisa Hayes supports meaningful transparency but focuses on practical, understandable disclosures rather than full system transparency.


Topics

Human rights | Legal and regulatory


Unexpected differences

Effectiveness of automated content moderation

Speakers

– Lisa A. Hayes
– Thibaut Bruttin

Arguments

AI enables identification of harmful misinformation at scale, with 80% of removed content identified through automation and 98% removed proactively


Disinformation represents massive disruption of public conversation, with lies and propaganda spread intentionally by actors with tech company complicity


Explanation

Unexpectedly, the TikTok representative’s emphasis on successful automated moderation statistics directly contradicts the press freedom advocate’s assertion that tech companies are complicit in spreading disinformation. This creates tension between industry claims of effectiveness and civil society criticism of inadequate response.


Topics

Human rights | Cybersecurity


Trust in democratic institutions and processes

Speakers

– Liisa Ly Pakosta
– Thibaut Bruttin

Arguments

Democratic countries, real NGOs, and innovative companies can work together effectively, but must be wary of authoritarian governments and compromised organizations


Different stakeholders should maintain distinct roles – governments should govern and legislate rather than sit alongside tech companies as equals


Explanation

While both speakers support democratic values, Liisa Ly Pakosta expresses optimism about collaboration between democratic actors, while Thibaut Bruttin shows skepticism about governments’ willingness to properly regulate, suggesting they may be too close to tech company interests.


Topics

Human rights | Legal and regulatory


Overall assessment

Summary

The main areas of disagreement center on governance approaches (multi-stakeholder vs. role separation), the effectiveness of media literacy vs. systemic change, and the extent of transparency required. There are also tensions between industry claims of effective self-regulation and civil society demands for stronger oversight.


Disagreement level

Moderate disagreement with significant implications. While speakers share common goals of protecting democracy and human rights, their different approaches to achieving these goals could lead to conflicting policy recommendations. The disagreements reflect deeper tensions between collaborative governance models and more regulatory approaches, which could impact the effectiveness of global responses to disinformation and platform governance.




Takeaways

Key takeaways

Digital platforms have fundamentally transformed information ecosystems, creating both democratization of expression and dangerous fragmentation into echo chambers that undermine social cohesion


Disinformation represents an existential threat to democracy, with AI-enhanced campaigns becoming faster, cheaper, and more sophisticated, particularly targeting electoral processes


Technology is not neutral – it is designed with inherent intentions and creates new cultural and ethical frameworks that reshape human relationships and reality perception


Full transparency in algorithmic systems is essential for public trust, requiring accessible explanations, regular reporting, and user control mechanisms


Multi-stakeholder collaboration is crucial but stakeholders must maintain distinct roles – governments should govern and regulate rather than partner as equals with tech companies


Media literacy and education require long-term investment and must evolve to help citizens become intelligent users of digital information rather than passive consumers


The fight against disinformation must protect rather than suppress freedom of expression, balancing security needs with fundamental human rights


International cooperation through binding treaties and enforceable legal standards is necessary to address the global nature of information manipulation


Supporting independent journalism and editorial-led media is critical to maintaining reliable information sources in the digital age


Resolutions and action items

Continue partnerships between platforms and fact-checking organizations, with TikTok maintaining relationships with 20+ global fact-checkers across 100 markets


Implement AI-driven education programs starting from elementary school, as Estonia is pioneering from September 2025


Establish regional safety and youth advisory councils with independent experts to provide localized guidance to tech platforms


Develop and enforce the new Council of Europe international treaty on Artificial Intelligence and Human Rights, Democracy and Rule of Law


Create must-carry provisions requiring tech platforms to onboard and give prominence to verified news content


Implement media exemptions in content moderation policies to protect legitimate journalism covering sensitive topics like war


Develop new product features like TikTok’s footnotes system to provide community-driven context to content


Establish binding legal frameworks that treat digital space as public utility requiring democratic governance rather than private control


Unresolved issues

How to balance content moderation with freedom of expression, particularly regarding legitimate journalism covering sensitive topics like war


How to address the fundamental economic disruption of traditional media by tech platforms while maintaining press freedom


How to distinguish between legitimate democratic governments and authoritarian regimes in governance frameworks


How to handle the role of compromised NGOs and civil society organizations that may be funded by hostile state actors


How to scale meaningful transparency beyond technical disclosures to ensure public understanding of complex algorithmic systems


How to address the shift of advertising revenue from traditional media to tech platforms and its impact on journalism sustainability


How to develop effective international enforcement mechanisms for digital governance standards


How to balance innovation and safety in the global AI race without stifling technological advancement


How to address generational differences in digital literacy and information consumption patterns


Suggested compromises

Implement graduated transparency requirements that provide both technical details for experts and plain-language explanations for general users


Develop hybrid content moderation systems combining automated detection with human review and community input through features like footnotes


Create differentiated governance approaches that distinguish between democratic and authoritarian contexts while maintaining universal human rights standards


Establish sector-specific roles in multi-stakeholder governance where each stakeholder maintains distinct responsibilities rather than equal partnership


Balance platform accountability with innovation by requiring safety-by-design principles while allowing technological experimentation


Implement media exemptions in content policies while maintaining overall community guidelines and safety standards


Support both traditional editorial media and platform-based information sharing through complementary rather than competing approaches


Develop long-term media literacy programs while implementing immediate technical solutions for content verification and fact-checking


Thought provoking comments

Technology is not neutral. All technology is born with an intention, and afterwards the user can apply another intention in using it. We are used to saying that technology is neutral – that we can use it for good or for bad – but it is not. It is born with an intention.

Speaker

Monsignor Lucio Adrian Ruiz


Reason

This comment fundamentally challenges the widely accepted notion of technological neutrality, introducing a philosophical and ethical dimension that goes beyond typical policy discussions. It forces consideration of the intentionality embedded in technological design and development processes.


Impact

This shifted the conversation from reactive approaches (how to manage technology’s effects) to proactive considerations (understanding technology’s inherent purposes). It influenced subsequent speakers to consider the deeper anthropological and ethical implications of digital platforms, moving beyond surface-level regulatory discussions.


It’s a fantasy to believe that digital space is a private space run by tech companies. We have delegated it to them. It’s something that’s owned by the public to some extent. It’s a public utility, and we need to claim it. We need to restore democratic guarantees in the digital space.

Speaker

Thibaut Bruttin


Reason

This reframes the entire governance debate by challenging the fundamental assumption about ownership and control of digital spaces. It introduces the concept of digital platforms as public utilities rather than private enterprises, which has profound implications for regulation and accountability.


Impact

This comment created a conceptual turning point in the discussion, leading other speakers to address the balance between private enterprise and public interest. It influenced subsequent discussions about transparency, accountability, and the role of democratic institutions in digital governance.


We will come to grips with everything that is happening with the new emerging technologies and I think that we will, as societies, benefit from them more than we’re harmed by them… This is not new. Every time we’ve had a major technological leap forward, we have had problems of misinformation and disinformation. We started with the printing press and we wound up with tabloids at the supermarkets.

Speaker

Lisa A. Hayes


Reason

This historical perspective provides crucial context that challenges the panic narrative around current digital challenges. By drawing parallels to previous technological disruptions, it offers a more measured view of current challenges while acknowledging their seriousness.


Impact

This comment helped balance the discussion by introducing historical perspective and cautious optimism. It influenced the tone of subsequent responses, encouraging speakers to consider both challenges and opportunities rather than focusing solely on threats and problems.


Estonia stands for full transparency… People can check from their mobile phones who has checked their data. You can go to the Internet and you have a full picture of who has taken a look at your data… So we definitely stand for full transparency also by the media companies because, as you very rightly said, this is a public space.

Speaker

Liisa Ly Pakosta


Reason

This provides a concrete, real-world example of how radical transparency can work in practice, moving beyond theoretical discussions to demonstrate practical implementation. It shows how trust can be built through transparency in digital governance.


Impact

This shifted the transparency discussion from abstract principles to concrete implementation strategies. It influenced other speakers to consider more specific and actionable approaches to transparency, moving the conversation toward practical solutions rather than theoretical frameworks.


I don’t believe so much in multi-stakeholderism. I mean, every time you put people around the table and they represent… Like, governments should not sit on the same side of a table as tech companies. I mean, governments are here to govern and legislators to make laws, and we should not too much mix all that.

Speaker

Thibaut Bruttin


Reason

This directly challenges one of the foundational principles of internet governance – the multi-stakeholder model – that underlies the entire IGF process. It introduces a critical perspective on power dynamics and the potential for regulatory capture.


Impact

This created tension in the discussion and forced other participants to defend or reconsider the multi-stakeholder approach. It led to more nuanced discussions about the roles and relationships between different stakeholders, influencing how subsequent speakers framed their collaborative approaches.


A part of strengthening resilience to disinformation campaigns is an inclusive and just society. This facilitates trust, stability, and the ability of citizens to take part in an open and informed public discourse.

Speaker

Lubna Jaffery


Reason

This connects disinformation resilience to broader social justice issues, expanding the scope beyond technical solutions to include societal equity. It suggests that information integrity is fundamentally linked to social cohesion and justice.


Impact

This broadened the discussion beyond technical and regulatory approaches to include social and economic dimensions. It influenced subsequent speakers to consider the relationship between inequality, discrimination, and information vulnerability, adding depth to the conversation about building resilient societies.


Overall assessment

These key comments fundamentally shaped the discussion by introducing philosophical depth, challenging basic assumptions, and expanding the scope of analysis. Monsignor Ruiz’s comment about technology’s inherent intentionality and Bruttin’s characterization of digital space as public utility shifted the conversation from reactive policy discussions to fundamental questions about power, ownership, and democratic governance. Hayes’ historical perspective provided necessary balance, while Pakosta’s concrete example of Estonian transparency practices grounded theoretical discussions in practical reality. Bruttin’s challenge to multi-stakeholderism created productive tension that forced deeper examination of governance models. Together, these comments elevated the discussion from surface-level policy debates to fundamental questions about the relationship between technology, democracy, and society, creating a more nuanced and comprehensive exploration of the challenges facing digital governance.


Follow-up questions

How can we better understand human behavior, particularly young people’s information consumption patterns?

Speaker

Bjorn Berge


Explanation

He noted that young people increasingly turn to influencers for information (40% in US surveys) and called for more research on this trend and its implications for combating disinformation


What are the lessons learned from existing legislation like the EU Digital Services Act and how can enforcement be improved?

Speaker

Bjorn Berge


Explanation

He suggested reflecting on implementation experiences and considering whether new legislative approaches or conventions on disinformation might be needed


How can we develop new conventions or international agreements on disinformation and foreign influence?

Speaker

Bjorn Berge


Explanation

He mentioned this as a recommendation from a youth hackathon in Strasbourg, suggesting the need for binding international frameworks


How can algorithmic systems achieve full transparency, especially with machine learning technologies?

Speaker

Liisa Ly Pakosta


Explanation

She acknowledged the complexity of making machine learning systems fully transparent while maintaining her position that complete transparency is necessary for trust


How can we better define and distinguish between disinformation, misinformation, lies, and propaganda?

Speaker

Thibaut Bruttin


Explanation

He emphasized that these terms are often used loosely and need clearer definitions to address the problem effectively


How can we develop media exemptions that properly reflect the needs of journalism in content moderation?

Speaker

Thibaut Bruttin


Explanation

He highlighted the problem of legitimate war reporting being taken down due to graphic content, suggesting need for better policies that account for journalistic needs


What is the appropriate role and responsibility of advertisers in supporting democratic media ecosystems?

Speaker

Thibaut Bruttin


Explanation

He noted that 80% of advertising has shifted to tech companies and questioned whether this shift serves democratic interests


How can we ensure technologies do not reproduce or amplify existing inequalities and discrimination?

Speaker

Lubna Jaffery


Explanation

She cited examples of biased facial recognition and algorithmic systems, emphasizing this as crucial for building resilient societies


How can we better understand the anthropological and ethical implications of digital culture?

Speaker

Lucio Adrian Ruiz


Explanation

He emphasized that digital culture fundamentally changes human relationships and reality perception, requiring deeper research into these transformations


How can we develop more effective methods for promoting reliable, positive information to combat lies and disinformation?

Speaker

Bjorn Berge


Explanation

He called for creative approaches to elevate trustworthy information rather than just focusing on removing harmful content


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.