DC-PR & IRPC: Information Integrity – Human Rights & Platform Responsibilities

19 Dec 2024 10:15h - 11:45h

Session at a Glance

Summary

This discussion focused on information integrity in the digital age, exploring the challenges and potential solutions for maintaining a healthy information ecosystem online. Participants from various backgrounds discussed the role of platforms, governments, and civil society in addressing issues like misinformation, hate speech, and threats to democracy.

The conversation highlighted the need for a human rights-based and multi-stakeholder approach to digital platform governance. Speakers emphasized the importance of transparency, accountability, and user empowerment in platform policies. The discussion touched on the limitations of current approaches to information integrity, with some arguing for more structural reforms to address the root causes of online harms.

Participants discussed various initiatives and policies, including UNESCO’s guidelines for digital platform governance, Brazil’s efforts to combat disinformation, and international collaborations like the Global Digital Compact. The role of regulation was debated, with speakers noting the challenges of balancing freedom of expression with the need to address harmful content.

The discussion also explored the specific challenges of maintaining information integrity during elections, highlighting the need for coordination between platforms, electoral management bodies, and civil society. Speakers stressed the importance of context-specific approaches and the need for ongoing assessment and adaptation of policies.

Overall, the discussion underscored the complexity of achieving information integrity in the digital age and the need for collaborative, multifaceted approaches that involve all stakeholders in the digital ecosystem.

Keypoints

Major discussion points:

– The concept of information integrity and its importance for democracy and human rights

– The role of platforms, governments, and civil society in regulating online content and combating misinformation

– Challenges around elections and the spread of disinformation on social media

– The need for structural reforms to address root causes of misinformation, not just symptom-level solutions

– Balancing freedom of expression with content moderation and platform accountability

The overall purpose of the discussion was to explore information integrity from the intersection of human rights and platform responsibilities, examining challenges and evaluating strategies to uphold human rights principles in the digital landscape.

The tone of the discussion was largely academic and policy-oriented, with speakers presenting research findings and policy proposals. There was a sense of urgency around addressing misinformation, but also caution about overly restrictive approaches. The tone became slightly more speculative and uncertain when discussing emerging decentralized platforms near the end.

Speakers

– Yasmin Curzi: Law professor at FGV Law School, Getulio Vargas Foundation, in Rio de Janeiro; Postdoctoral research associate at the Karsh Institute of Democracy, University of Virginia; Coordinator of the Dynamic Coalition on Platform Responsibility

– Dennis Redeker: Co-chair of the Internet Rights and Principles Coalition; Researcher at the University of Bremen

– Tapani Tarvainen: Moderator

– Ana Cristina Ruelas: Works for UNESCO on freedom of expression in the digital environment; Former member of Article 19 in Mexico and Central America

– Maira Sato: Representative from Brazil’s Secretariat for Digital Policies of the Presidency of the Republic

– Marrin Muhammed: Researcher

– Ramon Costa: Moderator

Additional speakers:

– Julia: Researcher at the Getulio Vargas Foundation (audience member who asked a question)

Full session report

Information Integrity in the Digital Age: Challenges and Solutions

This discussion explored the complex landscape of information integrity in the digital era, focusing on challenges faced by platforms, governments, and civil society in maintaining a healthy online information ecosystem. Experts from various fields examined the intersections of human rights, platform responsibilities, and democratic processes in the digital realm.

Key Themes and Discussions

1. Multi-stakeholder Approach and International Initiatives

A central theme was the necessity of a multi-stakeholder approach to addressing information integrity issues. Ana Cristina Ruelas emphasized the importance of involving governments, platforms, civil society, and academics in developing solutions. Maira Sato highlighted Brazil’s active role in promoting information integrity, including its inclusion in the G20’s agenda and the country’s adherence to the OECD recommendation on information integrity. Sato also discussed Brazil’s plans to launch a national chapter of the Global Initiative for Information Integrity on Climate Change, a collaboration with the United Nations and UNESCO.

2. Platform Responsibility and Accountability

Speakers unanimously agreed on the need for greater transparency and accountability from digital platforms, particularly during elections. Ana Cristina Ruelas stressed the importance of platforms being transparent about their content moderation practices and risk assessments. The discussion noted a shift from content-focused strategies to process-focused ones in regulatory approaches.

3. Structural Reforms and Root Causes

Marrin Muhammed argued that current regulatory approaches fail to address the root causes of information integrity issues, contending that platform business models and techno-design architecture foster a hostile digital environment. Yasmin Curzi advocated for policies to decentralize media monopolies across the broader media landscape.

4. Challenges in Regulating Digital Platforms

The moderator, Tapani Tarvainen, raised the difficulties of regulating decentralized platforms like Mastodon, highlighting the challenges of applying traditional regulatory approaches to new technological structures. The discussion touched on balancing freedom of expression with content moderation, particularly in decentralized environments.

5. Information Integrity During Elections

A significant portion of the discussion focused on maintaining information integrity during elections. Speakers stressed the importance of coordination between platforms, electoral management bodies, and civil society. An audience member raised questions about assessing platform performance during elections globally and identifying unacceptable outcomes in electoral processes related to social media.

6. Gender and Information Integrity

Yasmin Curzi discussed her article on Gender Information Integrity, highlighting the importance of feminist approaches in addressing information integrity issues. This perspective emphasized the need for diverse participation in content production and moderation.

7. Research and Global Perspectives

Dennis Redeker’s research on social media attitudes across 41 countries was mentioned, providing insights into global perspectives on platform regulation and information integrity. The role of UNESCO in managing a global fund to support research and communication projects on information integrity was also discussed.

8. Collaboration Between Platforms and Regulators

The Australian eSafety Commissioner was cited as an example of successful collaboration between platforms and regulators, demonstrating the potential for effective partnerships in addressing information integrity challenges.

Conclusion and Future Directions

The discussion underscored the complexity of achieving information integrity in the digital age and the need for collaborative, multifaceted approaches. Key takeaways included the need for structural reforms, the importance of human rights and feminist frameworks, and the crucial role of platform transparency and accountability.

Unresolved issues and areas for further exploration included:

– Effective regulation of decentralized platforms

– Balancing freedom of expression with content moderation

– Scaling up alternative, decentralized communication platforms while addressing moderation challenges

– Finding political will to implement structural reforms of platform business models

The conversation concluded with a call for continued development of guidelines and principles for digital platform governance, emphasizing the need for ongoing dialogue, research, and international cooperation to address the evolving challenges of information integrity in the digital age.

Session Transcript

Yasmin Curzi: so we don’t have any more delays. Thank you all for your patience. My name is Yasmin Curzi. I’m a law professor at FGV Law School, Getulio Vargas Foundation, in Rio de Janeiro, and a postdoctoral research associate at the Karsh Institute of Democracy at the University of Virginia. The session is organized alongside my colleague, Dennis Redeker, from the Dynamic Coalition on Internet Rights and Principles. The idea was a joint session with the DC-PR, the Dynamic Coalition on Platform Responsibility, which I coordinate with Professor Luca Belli. As we all know, digital platforms have radically transformed how we engage with information, ideas, and each other. And with these advancements come critical challenges: the spread of disinformation, misinformation, hate speech, and threats to democracy are concerns that demand our attention. The idea of information integrity has emerged as a framework to address these issues, but it remains an evolving concept that lacks a solid, unified theoretical framework. This session seeks to bridge that gap by exploring information integrity from the intersection of human rights and platform responsibilities. We just published the DC-PR outcome document; it is available on the Internet Governance Forum website, and I can put the link here in the chat later for any of you who have an interest in it. It basically tries to explore this gap in the information integrity literature, which we think could draw more on the platform responsibilities scholarship. So we proposed this session to examine the responsibility of social media platforms over online content, and governmental and international strategies to ensure and uphold human rights principles. Our goal is not only to discuss the challenges but also to evaluate and support the ongoing activities of all stakeholders involved in this debate and in advancing human rights and inclusion in the digital landscape. And that’s the idea of our session.
I hope we can all engage here. You can raise questions; we have time for Q&A after the speakers’ presentations. I’d like to thank you all again for being here, and thank you for your patience with this little delay in starting the session. Now I’d like to invite my colleague, Dennis Redeker, for his presentation, as well as a presentation of the IRPC, our partner here today. Thank you so much once again.

Dennis Redeker: Thank you very much, Yasmin. Thank you everyone for being here: the speakers, the onsite moderators, and obviously the technical team. Indeed, we proposed this joint session to bring together the knowledge of two dynamic coalitions. Next to the Dynamic Coalition on Platform Responsibility that Yasmin just introduced, the Internet Rights and Principles Coalition, founded more than 15 years ago, in 2008, is a dynamic coalition that today also deals with questions of platforms. Internet rights and principles are indeed also platform rights and principles, and the entire discourse on platform responsibilities, I would say, has taken a turn toward the question of human rights as a standard for platform actions, platform policies, and for ways of negotiating between different jurisdictions. The Internet Rights and Principles Coalition works in this field and applies human rights standards to all kinds of digital technologies; the charter that was written about 15 years ago specifically translates the Universal Declaration of Human Rights into the age of the Internet. This definitely applies to platforms and to other kinds of technologies, and that is, I think, a fruitful engagement with these kinds of documents: with the charter, with the UDHR, the Universal Declaration of Human Rights, obviously, and with platform rules and platform responsibilities. So we look forward to this discussion today. Thank you, everyone, for joining today.

Tapani Tarvainen: So our next speaker here would be Ana Cristina Ruelas, who works for UNESCO on freedom of expression in the digital environment, was formerly with Article 19 in Mexico and Central America, and is certainly very qualified to speak on these topics. So I’ll just hand over to you, Ana.

Ana Cristina Ruelas: Thank you very much. And thank you very much, Yasmin, for the invitation. It has been great following this discussion since last year, well, a little bit longer than that, but last year we also had this conversation. It is good to see how it has evolved over the last year, and how many more people are engaging in a more, let’s say, human rights-based and multi-stakeholder conversation about information integrity. Many things have happened in the last year. Primarily, I think the most important thing inside the UN is the approval of the Global Digital Compact, which talks about the importance of enhancing information integrity and allowing people to have access to reliable information in the digital sphere. I will start with that, because UNESCO started this conversation a long time ago, when we released the ROAM Principles and said that any kind of discussion around Internet governance should be done through a human rights-based and multi-stakeholder approach. After that, we started a huge discussion to try to identify how we should balance freedom of expression and access to information while dealing with the governance of digital platforms. This discussion led to the Guidelines for the Governance of Digital Platforms, a document that we believe helps different stakeholder groups understand how to meaningfully engage in digital platform governance, and tries to make sure that governments fulfill their responsibilities to enable the environment for freedom of expression: by refraining from shutdowns, by being transparent about the type of requirements they place upon digital platforms, by taking steps to close the digital divide, by ensuring journalist protection, and by enhancing media viability.
We also have the responsibility of digital platforms to comply with five key principles, which are very much related to what we should expect to happen in order to ensure the integrity of information. The first principle relates to the responsibility to be transparent. It is not only about transparency in how platforms put in place their operations and community guidelines; it is also about how they deal with content. Platforms have the responsibility to explain to their users how they take decisions over content and how they moderate or curate it. They should also be able to respond to users on what measures they are taking to ensure that, while they design, deploy, and use different products and services, they are careful about the type of risks those products and services could entail: primarily in specific contexts, for instance during elections or during crises and armed conflict, but also when there is a change of operations, or when there is a need to protect specific people who are critical to exercising freedoms in the digital sphere, such as journalists, human rights defenders, artists, scientists, et cetera. The guidelines also say it is very important to create tools within governance systems for users and non-users to have control over their own content and make their own decisions. This is important because, although there are now many actions aimed at strengthening the integrity of information, I think it is very important that we recognize the need for different users, primarily the most vulnerable and marginalized, to be able to create their own content, which can then build counter-narratives to those disseminating disinformation and hate speech.
They can also take control over their engagement with the different digital services and products. The Global Digital Compact says something very important about how to ensure information integrity, which UNESCO is trying to move forward along with different stakeholders: multi-stakeholder networks. We acknowledge that this is not only something that has to happen between regulatory authorities and regulated entities; the participation of civil society actors, academics, and journalists in the discussion is very, very important. For that, UNESCO has been strengthening the possibility of a global forum of networks of regulatory authorities, engaging with a group of think tanks and research centers that would serve as a trust for the implementation of the guidelines. This is very much related to what the Global Digital Compact says on the importance of integrating into this discussion those who are actually implementing the rules approved by legislative authorities. So for us, right now, the way forward to enhance information integrity relates to creating awareness of the human rights-based and multi-stakeholder approach to digital platform governance, which entails what I already said. On the other side, it relates to the strengthening of multi-stakeholder networks that acknowledge the importance of bringing together different stakeholders, not with just one role for each, but to actually participate together in the definition, implementation, and evaluation of policies, and to make sure that regulatory authorities have the specific capacities to deal with the new problems related to information integrity. I will leave it there, and then we can continue the conversation. Thank you very much.

Moderator: Okay, I’m on here. I think you can see me, Ana, but I’m here in person. Well, can you hear me? No. I hear you, but I cannot see you. I don’t know why. Oh no, they showed the video now. Actually, I tried turning on the video, but for some reason I cannot. Can you hear me now? Okay. Yes, I can hear you. Okay, I hope you can see me following the session. For a start, I want to say thank you, especially for inviting me. It’s a real pleasure to share this session with you, and it’s really incredible to listen to Ana Cristina Ruelas, a very important specialist who I think has many important topics to bring to us. Thank you, Ana Cristina. I think we have very specific work from UNESCO in producing knowledge about how human rights are challenged by new technologies, and then the economic and social challenges. But now I have to ask, I’m not hearing myself well, can you hear me well? Okay, my microphone just cut my voice, but okay, let’s go. Okay, now it’s better, oh, so much better. Well, I would like to ask Marrin, one of our speakers, if she could share her contributions about the Global South, please.

Marrin Muhammed: Thank you, thank you, everyone. Hi, am I audible? Okay. Yes. Yeah, thank you so much for having me as part of this panel and for this important conversation. In my intervention, I would like to talk about some of the limitations we see in the dominant framing of the term information integrity and in the current approaches to strengthening it. Information integrity is a much discussed as well as a much contested term. Initially defined narrowly in terms of accuracy, consistency, and reliability, the concept has since evolved. When we look at the UN Global Principles for Information Integrity, released in June of this year, we can see that the focus is no longer primarily on the integrity of information as such, but on the integrity of the information ecosystem. This is a welcome and necessary change, because many have pointed out how the earlier framing in terms of just accuracy, consistency, and reliability tended to neglect the broader ecosystem of actors and the economic, political, and social factors that shape the production, legitimization, dissemination, and consumption of information.
While we now have a broader framing of information integrity, we still need to deliberate on what it means in concrete terms, so that it does not become another buzzword. The UN Global Principles describe information integrity, and the GDC uses similar language, as fostering a pluralistic information space, one that enables trust, knowledge, and individual choice. No one can dispute that these are important values to achieve. But the question is, what do they mean in practical terms? What kind of pluralism are we talking about? How can we remove the barriers to exercising individual choice? What is required to build trust in diverse contexts?
These questions demand careful consideration, grounded in the unique complexities and challenges faced by different regions. In other words, while it is useful to have an overarching set of principles, for any real results these principles have to be infused with meaning by collective visions of what the information ecosystem should entail in each local or regional context. For example, some of the dominant discourses simplify information integrity to a binary of trustworthy versus untrustworthy information. The assumption that just providing trustworthy, accurate, and plural information will automatically empower citizens ignores the complexities of communication dynamics and public trust. It also overlooks the nuanced ways in which diverse populations perceive, interpret, and engage with information. A meaningful articulation of information integrity must address the informational needs of people in their regional, social, and cultural contexts, identify the barriers that prevent them from engaging meaningfully, and ensure their ability to participate in public deliberation. The dominant framing, which focuses primarily on the supply side of information (ensuring accuracy, reliability, trustworthiness), can be inadequate. The information integrity debate should also grapple with individuals’ ability to generate ideas and opinions, to express themselves and speak, and, most importantly, to have their voices heard and understood, because in many regions there are populations who have faced criminalization and censorship and, as a result, have had limited opportunity to produce information. Another point I want to raise concerns some of the limitations of the current dominant approaches to strengthening information integrity. The UN Global Principles on Information Integrity may have adopted a broader framing of the concept.
Yet the recommended responses do not seem promising enough to bring about structural changes in the information ecosystem, as they rely heavily on state-based regulation and the voluntary goodwill of technology companies. It is true that platform regulation measures like fact-checking, improving content moderation, and platform transparency are important and crucial, must be implemented with great vigor, and can go a long way in improving aspects of the information ecosystem. However, they tend to be symptomatic remedies, in the sense of mitigating harms. They operate within the confines of the surveillance-capitalist paradigm, ultimately reinforcing big tech’s gatekeeping powers without addressing the underlying structural issues. So these measures have to be accompanied by structural reforms as well. Further, even measures to enhance competition among digital platforms and services, such as mandating interoperability and data portability, are significant and should be carried forward. But even so, I am not sure they guarantee a shift away from the extractive, surveillance-oriented mode of curating conversations that we have now. As for measures to give users more control over the content they view, I am not sure how successful those will be, given the polarized environment where entrenched platform designs make it challenging to exercise meaningful choice even for tech-savvy users, let alone for the large section of the population who may lack digital literacy, may struggle to make such meaningful technological choices, and still rely on messaging apps like WhatsApp to get basic information or even access welfare services. So I am not sure how successful user empowerment measures will be in strengthening information integrity.
So the liberal regulatory playbook of supplier-focused and consumer-focused remedies for restoring the health and vibrancy of the public sphere does not address the root cause of the problem, which is the business model of the platforms and the techno-design architecture that foster a hostile digital environment. Challenging this paradigm requires structural changes to social media platform architecture, not just procedural rules to mitigate specific harms. We must reimagine how public discourse is organized online, placing the values of truth and democratic integrity at the center. Yet these values of truth and democratic integrity seem incompatible with the surveillance-capitalist model we follow today. Hence, to safeguard information integrity, it is vital to shift away from the extractive business model of online platforms. Even the UN Global Principles acknowledge this imperative, but stop short of mandating real action, leaving it to platforms’ voluntary re-evaluation of their models. This is insufficient. We need structural reforms, and we need legislation to enforce them through legal and policy measures. There have been many suggestions for how such structural reforms could be brought about. These measures could include imposing a statutory duty of care on platform owners to address the individual and societal harms stemming from their business model. Some even call for ex-ante licensing of platforms, akin to pharmaceutical drugs, given their societal impact. Regulations could also target platforms’ reliance on attention-driven algorithms: instead of prioritizing just relevance, platforms could be required by law to promote diverse, serendipitous, value-sensitive content tailored to different contexts. Regulations could even require platforms to change their structures from a profit-seeking to a non-profit model.
We need such bold solutions, I think, to really strike at the root cause of the problem. And beyond regulating big tech, there are other measures we should also think about, like fostering diverse alternative media and communication platforms. There is a strong case to be made for public service media and community-driven initiatives with a civic mission to provide citizens with a pluralistic, global view of the world, and this would require public funding and policy support. Reviving journalism, especially local journalism, is also very important: it is essential for combating disinformation, informing public deliberation, and restoring trust in the community. Governments can play a role here by providing sustained financial support and implementing revenue-sharing models between platforms and news publishers, and the media should also think about alternative business models to ensure their independence and survival in this changed landscape. To conclude, I would just like to say that addressing the chaos in today’s information landscape demands structural reforms that target platform business models, incentive structures, and techno-design architecture. Along with that, we need a positive vision of what an information ecosystem should look like, and measures that create incentives for truth and public deliberation, that satisfy the information needs of the people, and that remove the barriers people face in meaningfully engaging with, and responding to, the information they receive.

Yasmin Curzi: Yeah, that’s all, thank you. Thank you, Marrin. That was a very eloquent presentation of the root cause of the problem, with even some good ideas on how to address it, although the one thing still missing is how to find the political will to do so. But of course, talking here is one way to push exactly that, so maybe there is hope. I will not delve more into that. Next, we were supposed to have Samara Castro, but I understand she’s not present. But somebody from her office is. Is that correct? Yes. Yes. It’s Maira Sato. Okay, Maira, if you please, go on.

Maira Sato: Hi, hello. I don’t know if you can hear me properly. We can hear you, yes. We just do not see you, right? Well, Maira, I think you can start and then they can solve it later. Thank you so much for your patience. Oh, yes, I was having a problem opening my mic again. Well, thank you. Good morning. Thank you very much for inviting Brazil to take part in this debate. As I said, I’m actually replacing our Director for Freedom of Speech, Samara. She had a last-minute incident and could not join us here today. I didn’t have much time to prepare my intervention, but I hope I can contribute to the debate. First of all, I’d like to say that my institution, the Secretariat for Digital Policies of the Presidency of the Republic, is a new structure in the Brazilian government. It was built in Lula’s government, when it became very clear for us that we needed a digital policy to deal with the issues of disinformation, misinformation, and all the new challenges that the digital environment imposes on information ecosystems. As you may know, we had several important cases of disinformation that had a severe impact on elections, threatening our democracy, and also on our policies, including health policies, for example during the COVID-19 pandemic, when the spread of anti-vaccine disinformation affected our health policies and vaccination levels. So our Secretariat started its work focused on fighting disinformation. Our government established by decree the Prosecutor’s Office for the Defense of Democracy, and we designed institutional channels to ensure that information and analysis on cases of disinformation that threaten the democratic system and the implementation of federal policies would be duly followed up by the governmental institutions responsible for that. But besides its work in monitoring and informing the relevant institutions on cases of disinformation, our Secretariat also started some innovative programs.
We actually consider these to be a kind of pilot to create and test new approaches to dealing with disinformation in concrete sectors. First of all, we decided to focus our attention on the health sector, considering the serious impact that disinformation had on the decrease of vaccination levels in Brazil. Just as an example, in 2021 we reached the lowest vaccination levels in Brazil in decades, similar to what we had in the 80s. The program we launched, called Health with Science, was launched one year ago and is supporting our Ministry of Health in recovering vaccination levels in Brazil. The program acts through several pillars of action: monitoring of disinformation narratives, communication strategies, investigation and accountability, institutional partnerships, and capacity building. It aims to promote an integrated approach to dealing with disinformation. In parallel, our Secretariat was also actively contributing to the international debate on information integrity. This is because we understood that the concept of information integrity allows us to take a positive and proactive approach to the new digital information ecosystem and the challenges it imposes on public debate, which we understand is essential to the functioning of democracies and to the right of access to reliable, accurate, and evidence-based information. With this understanding that information integrity is a concept that allows a proactive and integrated approach to the new digital information environment, Brazil has been very active in including it in the debate, both nationally and internationally.
So, as Brazil held the presidency of the G20 this year, until November, we worked to put information integrity on the G20’s agenda, and we managed to include an item on information integrity in the G20’s working group on the digital economy. That was the first time that the G20 countries committed to act to promote information integrity in the digital space, so we think it’s an important step. We also contributed to the discussions on information integrity in New York during the negotiations of the Global Digital Compact, and we were happy to see a separate section on information integrity included in the GDC. Very recently, actually this week,

Maira Sato: we also adhered to the OECD recommendation on information integrity. And at the G20 summit last November, we officially launched, with the United Nations and UNESCO, the Global Initiative for Information Integrity on Climate Change. The idea, again, is to focus our attention on how to implement this concept in a specific sector, which is not so specialized, because of course it involves the whole society, but in any case it is a concrete implementation of the concept. This is our aim. The idea of this initiative is to join forces between governments, international organizations, and civil society organizations to promote information integrity on climate change through a global fund to be managed by UNESCO. The fund will support research projects and strategic communication projects, and reinforce existing communication campaigns. The initiative also aims at promoting the debate on information integrity at the institutional level and in the international agenda, including the Conference of the Parties of the UNFCCC. We are now preparing to launch the Brazilian chapter of the Global Initiative, which will be our national implementation of the Global Initiative for Information Integrity on Climate Change. Our intention is to build on the experience we have gained with our Health with Science program and create an integrated approach to promote information integrity on climate change, talking with different actors, including civil society, mobilizing the private sector, and the government, of course, so that it is a very inclusive process.
And to be faithful to the concept of information integrity as defined in the UN’s Global Principles for Information Integrity, we wish to adopt a broad perspective and focus our attention on the different elements that support information integrity, while understanding that regulation matters need to be treated a bit separately, as they don’t depend exclusively on the executive power of the federal government. In this context, we are focusing our attention on areas such as research, communication strategy, accountability, support for journalism, media and environmental education, and positive incentives for information integrity. In this last pillar, we also wish to foster a national coalition of advertisers for information integrity, which I think is very important, because it also deals with some of the structural issues of how advertising and publicity work in the digital space. But it is also important to mention that even though in the Brazilian chapter of the Global Initiative we are not focusing our attention on regulation (I think Maira is offline right now. Something happened with her connection, maybe. Maira? The sound. I think the mic is off again. Sorry about this. No problem.) I was just saying that even though our actions here in the executive power are not focused primarily on regulation, because we know that regulation depends on the other powers of government, which are not ours to decide,

Maira Sato: we are trying to participate actively also in the discussions on regulation taking place in Brazil’s National Congress. Recently we had an important approval by our Senate of our law on artificial intelligence. This law is not finally approved, because it still needs to pass the Chamber of Deputies, but it’s an important document in which we managed to establish a governance framework for AI systems, with a regulatory agency, with AI systems categorized by levels of risk and subject to different due diligence responsibilities, and with a special focus on human rights. We also managed to ensure that rights holders will be compensated when works subject to copyright are used to train AI systems. And moreover, we managed to include the term information integrity in the text. So we are also trying to foster this debate on information integrity nationally. In all these actions, our aim is to promote and develop the concept of information integrity nationally and internationally, and to test different approaches on how to concretely operationalize it as a public policy. So I think that’s it, what I had to say here. Thank you.

Ramon Costa: Hello. Can you hear me? Wow. I can hear you. I’m going to read some points that I noted while we were talking. The G20’s digital economy working group, under Brazil’s presidency, achieved a landmark consensus on promoting information integrity: for the first time we have a multilateral declaration addressing this crucial issue, highlighting its impact on political and economic stability. Four key areas were identified: meaningful connectivity, digital governance, artificial intelligence, and information integrity, the central theme of your session. Brazil champions a comprehensive approach, balancing rights and promoting transparency through initiatives like Brazil Against Fake News and Comunica BR. The G20’s acknowledgement of the far-reaching effects of disinformation and hate speech underscores the significance of this achievement, paving the way for global cooperation to combat misinformation and foster a resilient information ecosystem. These are some points that I wrote down while Maira was speaking, because I think we have a really good experience in Brazil with the G20, and Maira, it’s very important to have you here to share this contribution with us. So, thank you so much. But now, I’d like to ask Professor Yasmin Curzi to share her contributions. I admire Yasmin’s work very much. I read her article this week, Gender Information Integrity, and it’s really good. Yasmin, please, the floor is yours.

Yasmin Curzi: Thank you so much, Ramon. The article that Ramon mentioned came out of an invitation by IT for Change. Thank you so much, Mary, for inviting me to contribute to Botafogo. I’m putting the link here for any of you who are interested in following this discussion. What we are proposing in this article that Mary invited me to contribute was to discuss information integrity through a gender approach, the gender dimension within information integrity. Basically, what I’m trying to do here is link how feminist scholarship can contribute to the debate on platform responsibilities and information integrity, and what lessons from feminist scholarship could actually be utilized to inform policymaking on information integrity. I’m being really brief here because we still need to leave time for Dennis to speak and for the Q&A, but there are two proposals in this feminist approach to information integrity that I think are central. First, feminist scholarship tells us we really have to address inequity, to address inequality at its roots. As Mary’s presentation highlighted, we have structural issues regarding information integrity that relate specifically to the lack of media pluralism in the informational ecosystem and to power dynamics that are unequal, not only in Brazil but actually in all countries. We need policies to decentralize media monopolies, and I’m not talking only about the big techs specifically, but also about media in general, the TV channels and so on.
We need actions and policymaking to foster media diversity in this sense. Another lesson from feminist theory and feminist scholarship that could help inform the information integrity debate is to enable more participation in this space, more participation from local and regional initiatives, to bring more diversity not only to policymaking and participation processes, but also to content production and moderation. Another thing that feminist theorists, activists, and scholars have been highlighting is that platforms need to open themselves up to learn from the experiences and research that feminist scholars and activists have been producing. Civil society has been producing reports showcasing how online harassment and coordinated campaigns have specifically targeted women and LGBT+ individuals who take positions that dispute the status quo. And this significantly decreases the democratic and political participation of minorities. To address this, platforms need to improve their content moderation policies. This is a point that has been highlighted within the platform responsibilities literature for a long time, but again, as Mary, Ana Cristina, and the other speakers have been highlighting here, we need to address platforms’ monopoly. We need to talk to these actors and actually make them engage more with human rights in an active stance, not only promoting truth on the internet, but also tackling how the algorithmic content recommendation systems actually promote hate speech, because this relates to the attention economy, to how they profit from it. So we need to look at the root of the problem to be able to create efficient solutions. These were the ideas that I tried to highlight in this article.
Thank you for your time and patience here. I will pass the floor now to Dennis Redeker so he can also speak about his research. Thank you so much.

Dennis Redeker: Thank you, Yasmin, and thank you everyone. This was already a fantastic discussion, and I really appreciate the different perspectives that we bring together in this session. I think this is the spirit of the IGF, right? The multi-stakeholder perspective, and we just heard some researchers’ perspectives too. Let me share some research, very fitting to this topic, conducted at the University of Bremen. So I have my academic hat on here, in addition to my hat as co-chair of the Internet Rights and Principles Coalition. Can you see the slides of my presentation? I wanted to provide a perspective from a recent 41-country comparative study that we conducted, asking people about their attitudes toward social media platforms. A number of the questions relate very directly, I think, to the question of information integrity, even though the discourse on information integrity has only now become a global one. And thank you so much to Brazil and others and UNESCO for championing these efforts also at the state and interstate level. The research was done in 41 countries, as I said, mostly in the Global South and Eastern Europe, and was conducted at the University of Bremen. The method was essentially a web-based online survey, with a questionnaire in six languages, at the end of 2022 and into early 2023. The sample includes about 17,500 people from these countries. Some country samples are a little better than others; we can talk about that. But that’s just the background. What did the respondents, those users of Facebook and Instagram, say when we asked them to what extent misinformation is a concern to them? It’s one of the questions we had. And I can tell you one thing: this is a scale from one to five, and the average is quite high, at almost four.
So quite a number of people are concerned about misinformation around the world. But it differs according to where you are and which country you live in, and I think it differs by a number of other characteristics too. Why is this important? Because we often see this debate on information integrity as a global one, and it is a very important one, but it applies in different nuances in different places and for different people. We’ve just heard the feminist perspective there. (Sorry to interrupt, but there’s some very disturbing new sound right now. I’m not sure where it’s coming from. It looks like it’s a mic or something. Let me double check. Background noise, as it were. Yeah, there’s no background noise here, but I can speak a little closer. All right, I’ll do that now. Maybe it’s the computer.) So, in the comparison here from the survey, analyzing by country, we see that, for example, people in Poland have the most concern about misinformation, whereas people in Haiti have the least concern. There’s a bit of a global North-South
tendency here, but it’s far from perfect. Switzerland, the only Western European country in the sample, is somewhere in the middle, and we do have countries from sub-Saharan Africa at the top of the list, including Ghana and Nigeria. Misinformation is one thing, but the other question, and that’s a question we wanted to bring in as the Rights and Principles Coalition, is to what extent human rights are now better protected in the times of social media. So one question that I asked was whether people thought that, since the advent of social media, the following five human rights have been better or worse protected. Has the protection of human rights, in other words, decreased, increased, or remained the same? You see the legend on the right. Again, this is with seventeen and a half thousand people answering a questionnaire. What we can see here, and I think this is quite interesting, is that, perhaps in spite of hate speech and misinformation, the internet is a force that helps protect the human right to information, access to information: a large majority of people say that the protection of that right has increased. The protection of the right to freedom of expression has increased too, two thirds say that; some people say it decreased, some say it stayed the same. Equality is interesting, and it relates perhaps to Mary’s talk, coming from a social justice perspective: what does it mean for equality that we have these social media platforms? It’s split a little bit here, some say increase, some decrease, but most people say it has remained the same. On life and liberty, there’s also quite a number of people, about a third, who say that since the advent of social media the protection of that right has decreased. And if we think about hate speech specifically, that might actually be one of the drivers of that.
There’s also a strong correlation between that answer and a concern for hate speech among those respondents; I don’t have that data with me now. But strikingly, and not surprisingly, the right to privacy is seen by respondents in those 41 countries to have overwhelmingly decreased, and only some people think that social media has actually increased the protection of that right, or that it remained the same. These are just some insights from a survey that hopefully helps us contextualize the debate. On the one hand, as this slide shows, the protection of some rights, particularly the right to information and, obviously, freedom of expression, seems to have increased from a human rights perspective. That’s positive. But people have concerns, not just about hate speech but, as I have demonstrated, a high level of concern about misinformation as well. And I think the survey also shows that governments and other actors need to work together to realize information integrity across all levels and in all countries. Thank you.
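The kind of aggregation Dennis describes, a mean concern score per country on a one-to-five scale, and shares of "increased/decreased/same" answers per right, can be sketched in a few lines. This is a minimal illustration with made-up numbers, not the actual Bremen dataset; every country, value, and field name here is hypothetical.

```python
# Illustrative sketch only: hypothetical survey rows of the kind
# described in the talk. Concern is a 1-5 Likert rating; "privacy"
# records whether the respondent thinks protection of that right
# increased, decreased, or stayed the same.
from collections import Counter, defaultdict

responses = [
    {"country": "Poland", "misinfo_concern": 5, "privacy": "decreased"},
    {"country": "Poland", "misinfo_concern": 4, "privacy": "decreased"},
    {"country": "Haiti",  "misinfo_concern": 2, "privacy": "same"},
    {"country": "Haiti",  "misinfo_concern": 3, "privacy": "increased"},
]

# Mean misinformation concern per country, as in the cross-country ranking.
by_country = defaultdict(list)
for r in responses:
    by_country[r["country"]].append(r["misinfo_concern"])
means = {c: sum(v) / len(v) for c, v in by_country.items()}

# Share of respondents per answer for one right, as in a stacked-bar slide.
n = len(responses)
privacy_shares = {k: v / n for k, v in Counter(r["privacy"] for r in responses).items()}

print(means)           # e.g. {'Poland': 4.5, 'Haiti': 2.5}
print(privacy_shares)  # e.g. {'decreased': 0.5, 'same': 0.25, 'increased': 0.25}
```

With real data the same two aggregations would simply run over the full 41-country sample instead of four toy rows.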

Tapani Tarvainen: Thank you, Dennis. I don’t think we have any prepared speeches left. So should we switch to questions from the audience? Anybody in the room? I think we have questions online as well. Can you read them? Yes, just a minute. I’m muting the

Audience: Hi, can you hear me? Yes, yes. Okay, thanks everyone; every speech was very rich. I speak from Brazil. My name is Julia. I’m a researcher at the Getulio Vargas Foundation. And I have a question for everyone regarding the 2024 municipal elections, since we’re witnessing the rise of influencers across various domains, such as politics and wars, and even in institutional roles like law enforcement, which is integrating social media into its work. I’m asking, of course, from my perspective on the Brazilian municipal elections, but I would also like to hear everyone’s take on it, because there was a noticeable lack of platform responsibility and accountability in moderating harmful content here, specifically including criminal attacks on opponents’ reputations, for example. This highlights how critical a moment elections can be, and it remains a major challenge in the digital age. So I would like to hear everyone’s view, if possible. Given that context, how do you assess the performance of social media platforms during the 2024 municipal elections globally? What are the most unacceptable outcomes that you wouldn’t like to see repeated in future electoral processes? And finally, what do you think is civil society’s role in ensuring those unacceptable outcomes are identified, addressed, and taken seriously by platforms? Thank you very much.

Tapani Tarvainen: So, who wants to take on that?

Maira Sato: I can talk a little bit, as I’m also here from Brazil. Yes, we had some incidents in Brazil that were very serious and very damaging to the elections. Just one day before the municipal elections in São Paulo, for example, we had an influencer who was spreading misinformation about the main opponent, and we had this in the second round too. This is a structural problem we are dealing with in Brazil. As we mentioned here, we need accountability and transparency, and we have to regulate how social platforms deal with this kind of content. In Brazil, we think the main step is to approve the laws we are discussing in Congress, which unfortunately are paralyzed right now. But the government has been trying to address the issue as best it can, even without the regulation being approved yet. Our electoral court is trying to issue other regulations and norms to deal with this kind of thing. But this is a challenge we really need to address properly, in a structural manner. And I think civil society has actually been very important at every moment, since 2022, when we also had widespread disinformation during the elections, and in 2018 as well. The pressure that civil society applies is very important to ensure that governments try to deal with this in a structural and effective manner. So this is still an ongoing discussion. On the one hand, we are trying to negotiate and talk to the platforms and ensure that they apply their own moderation rules, but I also think we need to advance the discussion on regulation.

Yasmin Curzi: Thank you so much, Maira. I think we have one more question. I don’t know if any of you wanted to address this question as well. Go ahead, Dennis. You start.

Dennis Redeker: Thank you. This is also a way of integrating different sessions here at the IGF. Yesterday there was a workshop on the question of AI and disinformation during elections, so your question is very much valued and speaks to the conversation we had there. A researcher from Oxford University, Roxana Radu, brought up the case of the recent Romanian elections, where a court cancelled the results of the presidential election due to one candidate apparently benefiting from a major social media platform, so the question of transparency is quite acute. The question of information integrity around elections is very important, and the debate we had illustrated all the challenges that we face in this regard. It would obviously be much better to have solutions in place beforehand, so that elections can take place uninhibited, rather than having to cancel them afterward; that is a very dramatic and potentially also problematic democratic practice. So that’s something we brought up in that session; I just wanted to feed it back into this debate.

Ana Cristina Ruelas: Yeah, well, I wanted to mention also that, for instance, in the Guidelines for the Governance of Digital Platforms, one of the things that we highlight very emphatically is that we need to be careful about labeling content. Although it’s true that there are many systematic campaigns of disinformation that can be recognized as subject to restriction, specific pieces of content should go through due process of law. This is important because, if we consider that elections are actually a moment when there are special protections for freedom of expression, since it is also important for society to have plurality and access to a variety of information, we cannot just go and try to regulate, let’s say, a trend or a specific type of content. That’s why, and I’m glad I came after Dennis, because he mentioned transparency, I think what we need to think about, as civil society and as international organizations, is how to ensure that platforms are accountable for the management of the systemic risks that they see and foresee when it comes to elections. We saw during the 2024 elections that some of the platforms published the risk assessments they did before the process started, but we don’t know, for instance, how they evaluated those measures, or how often they continued doing those assessments, because risk is not static: risk moves, and it is important that during an election process the assessment can move with it. We also don’t know whether they actually reinforced measures that they have in, let’s say, peaceful times rather than election times. We do not know how they manage political advertising during non-election times versus election times.
So I think there are many different elements of transparency, due diligence, and accountability that we need to start putting forward and specifically reinforcing during election times; that will be key for the next election periods. And on our side, and also on the government side, it is very important that governments and electoral management bodies are transparent about the type of requirements that they place on platforms during election time, and obviously outside elections, but primarily during election time. What type of content are they requiring platforms to moderate? What, for instance, are the requirements they are placing on staff members? These different things matter when it comes to this multi-stakeholder approach: how government is also being transparent about the acts it puts forward during electoral times, because it goes both ways. So I will share the guidelines in the chat, but there are these specific elements that I think are important to move forward on. And there is also a new discussion on the specific measures that should be reinforced during this period, acknowledging that elections do not start on election day itself: it’s the whole cycle that needs to be revised and updated. And civil society actors should definitely be participating.

Panelist: I just wanted to come in here to follow up on Ana Cristina’s point on the role of electoral management bodies. I think that’s very crucial because in India, speaking from my context, there has been a recognition by the Election Commission of the role played by social media platforms in influencing the discourse. So there are guidelines of sorts in place, the Commission now actively looks into social media activity as well, and there is a set of guidelines that candidates of the political parties have to follow. But the problem, again, is implementation. There have been instances, not one but a couple, where a major political party put out a hateful and divisive video against a particular community and it went viral. It took a lot of time for the Commission to ask the platform to take it down; the platform hadn’t taken it down on its own. So it took time for the video itself to be removed, and by that time it must have gone viral. So I think the electoral management bodies should look out for such instances more actively; they can’t just trust the social media platforms to monitor the content. The regulator’s role here is very important, and they should come up with comprehensive and concrete SOPs that are contextualized to the digital world. So yeah, I just want to highlight the importance of coordination between the electoral bodies and the platforms during election time. Thank you.

Yasmin Curzi: Thank you so much. I actually have a question of my own that I wanted to hear your views on; Mary touched on this a bit. Should the responsibility of balancing freedom of expression lie solely with platform providers, or do governments and regulatory bodies have a role to play, and what should that role be? How do you see the stakeholders’ roles in this debate? Are there any experiences of platform regulation that are going well in any country? Should the courts have a permanent role, as they do, for example, here in Brazil? Or should we think of regulatory bodies within the government, as in Turkey or other countries, or the United Kingdom as well? So what do you think about these arrangements? Is there a good model that we should look to on the horizon, or is this not possible yet?

Ana Cristina Ruelas: So, I would not say that we can yet point to a perfect model. In many of the cases, these are frameworks that are still being put to the test. For instance, in the case of the European Union, the first risk assessments under the DSA, the Digital Services Act, including the human rights assessments, were published just two or three weeks ago, and researchers are still looking at what they find there. The UK Online Safety Act is also still in the process of publishing its mechanisms for managing risk. There is a lot for researchers and civil society to see in terms of how this is actually changing the behaviour of the platforms and engaging new actors in the discussion of digital platform governance, definitely. In the Asia-Pacific region, Australia has made a lot of advances on the safety mechanisms that have been put in place. So there are different models that we have seen all over the place. I think regulatory authorities are also changing their view and their narratives in order to think broadly about systems and processes, which I think is very important, because from 2015 to 2020 we still saw a lot of solutions aimed at criminalizing content, and now we are going more down the path of trying to identify how platforms should be accountable for the systems and processes they put in place to identify and deal with potentially harmful content. As I said in my last intervention, the published human rights assessments that the platforms did before elections are a good step forward. It is definitely not the only thing they should do, but I think it is important. And I definitely don’t think that it relies only on the platforms.
I think that in order to enable an environment for freedom of expression online, there should be participation by states, and there should be participation by civil society, academics, et cetera. But specifically for state actors, one of the things that Mary mentioned was: how do you ensure that once you’re included in the internet ecosystem, you have the digital tools to engage properly and have control over your own content, your products, et cetera? All of this requires an interaction of different stakeholders, not only governments and regulators or platforms. So I think there’s definitely a duty, I would say, and it was said as well, for states, for instance, to protect journalists, who are the critical voices that we need to protect also in the digital space. We definitely need investment from governments in media development, considering that platforms have already taken most of the advertising revenue from traditional media. So there are many different actions that should be taken by different stakeholders, and it is not just one action that corresponds to one actor.

Panelist: Yeah, I’m actually wary of giving either the platforms or the state control over how to regulate speech, but both of them have their roles. Speaking from the Indian laws, I can say that earlier, the safe harbor protection meant that as long as platforms were not overseeing content, they could not be held liable. Then there was a court order which said that platforms needn’t actually moderate, and that all speech decisions should come from the courts: any dispute has to go to court, and if the court says that a piece of content has to be removed, it is removed. But the problem with that is that in the context of internet media, going to court and getting an order is really not helpful for a lot of content, especially content related to online gender-based violence, where quick removal is necessary. So that leads us to rely on the platforms again. And then we realized that the platforms are not playing that role either. So in India we now have a system where, if someone is aggrieved by a decision of a platform, they can go to a government body, which will have government bureaucrats looking into the decision, and there have been concerns about how fair that is going to be. So yeah, I think it’s a tricky thing as to who should have the ultimate say, and as Ana Cristina says, it should be a collaborative effort. I just want to mention the Australian eSafety Commissioner, where they collaborate with the platforms to actively take down content. So I think civil society’s role is also important in really taking down harmful content from the internet. Thank you.

Yasmin Curzi: Dennis, do you want to add something, anything as well, or should we wrap up? No?

Moderator: If not, I might throw one curveball into the discussion. Have you thought about platforms, or platform-like things, that are not centralized in a way that could be controlled by management? There are some chat tools based on onion routing or peer-to-peer communication, where there is basically no central choke point, and no company behind it that you can give orders to. How would you regulate those, or can anything be done about them? Too difficult a question, I’m afraid, at this point.

Yasmin Curzi: Does anyone want to address this?

Panelist: Yeah, I mean, I may not have an answer for exactly how to regulate them. When we talk about dismantling centralized structures, we say, and even in my intervention I said, that we should foster the development of alternative communication platforms which are decentralized. And there have been excellent examples: Mastodon is a federated model, and there are instances like Social.coop, which runs on a cooperative model. While these pose an alternative to the current centralized model, there are still challenges there with respect to moderation. So I’m not really sure how to regulate such platforms, but I feel that they should be explored, and they do have their advantages: instead of some centralized team of moderators, they allow the users also to engage in moderation activities. But again, I don’t know how much this can be scaled to a level where it’s accessible to a lot of people, and how to deal with the moderation problem is really something that I’m also thinking about. I would like to hear if anybody else has an opinion on this.

Moderator: Just noted that we have four minutes to go. The Fediverse, Mastodon-type case is solvable, I think, to an extent: each Mastodon server is still a server controlled by someone, with its own moderation policies. But some other approaches are much harder to moderate. Anyway, I have spoken enough. Yasmin, you can carry on with whatever you want to say, but we have three and a half minutes.

Yasmin Curzi: Thank you so much, Stefani. Thank you, Dennis, also for partnering here with the DC-PR and the IRPC folks as well. Thanks, Ana Cristina and Mary Mohammed, for joining us here. Maira Sato had to leave a bit earlier. Thanks to everyone who is watching us online or in person. Apologies again for the technical glitches. See you next year. Please keep up with the DC-PR and the IRPC activities. We have mailing lists if you want to subscribe and to engage with our internal activities within the coalitions. Please write to me or Dennis, depending on which coalition you want to join. We much appreciate new people coming in. Thank you so much once again. I hope that you enjoyed the session. See you next time. Bye-bye.

Moderator: Dennis, you have two minutes if you want to say something else.

Dennis Redeker: I just want to say, Richard, thank you so much to everyone: speakers, moderators, and the technical team. Thank you for the session, and we’ll see you next year at the IGF. Please check out our website and join the email lists.

Yasmin Curzi: So we are finishing in time, more than a minute to go. Perfect. See you in Oslo. Bye bye.

Marrin Muhammed

Speech speed

143 words per minute

Speech length

1441 words

Speech time

600 seconds

Need for structural reforms to address root causes of misinformation

Explanation

Marrin argues that current approaches to information integrity are insufficient and focus on symptomatic remedies. She calls for structural reforms targeting platform business models, incentive structures, and techno-design architecture to address the root causes of misinformation.

Evidence

Examples given include imposing a statutory duty of care on platforms, ex-ante licensing, and regulations targeting attention-driven algorithms.

Major Discussion Point

Information Integrity and Platform Responsibility

Agreed with

Ana Cristina Ruelas

Yasmin Curzi

Agreed on

Need for multi-stakeholder approach to information integrity

Differed with

Ana Cristina Ruelas

Maira Sato

Differed on

Approach to regulating digital platforms

Ana Cristina Ruelas

Speech speed

138 words per minute

Speech length

2003 words

Speech time

865 seconds

Importance of multi-stakeholder approach and human rights framework

Explanation

Ana Cristina emphasizes the need for a multi-stakeholder approach to information integrity, involving governments, platforms, civil society, and academics. She highlights the importance of using a human rights framework to guide platform governance.

Evidence

Reference to UNESCO’s guidelines for the governance of digital platforms and the Global Digital Compact

Major Discussion Point

Information Integrity and Platform Responsibility

Agreed with

Marrin Muhammed

Yasmin Curzi

Agreed on

Need for multi-stakeholder approach to information integrity

Differed with

Marrin Muhammed

Maira Sato

Differed on

Approach to regulating digital platforms

Need for transparency and accountability from platforms, especially during elections

Explanation

Ana Cristina emphasizes the importance of platforms being transparent about their risk assessments and mitigation measures during elections. She argues for ongoing accountability and transparency in how platforms manage systemic risks.

Evidence

Reference to recent platform publications of risk assessments before election processes

Major Discussion Point

Regulation of Digital Platforms

Agreed with

Maira Sato

Panelist

Agreed on

Importance of platform transparency and accountability

Importance of platform transparency and risk assessment during elections

Explanation

Ana Cristina emphasizes the need for platforms to be transparent about their risk assessments and mitigation measures during elections. She argues for ongoing evaluation and accountability of these measures throughout the electoral cycle.

Evidence

Reference to recent platform publications of risk assessments before election processes

Major Discussion Point

Disinformation and Elections

Maira Sato

Speech speed

132 words per minute

Speech length

1239 words

Speech time

559 seconds

Brazil’s efforts to promote information integrity nationally and internationally

Explanation

Maira discusses Brazil’s initiatives to address information integrity, including the creation of a Secretariat for Digital Policies and the launch of programs to combat disinformation. She also highlights Brazil’s role in promoting information integrity in international forums like the G20.

Evidence

Examples include the Health with Science program and Brazil’s contribution to including information integrity in the G20 agenda

Major Discussion Point

Information Integrity and Platform Responsibility

Agreed with

Ana Cristina Ruelas

Panelist

Agreed on

Importance of platform transparency and accountability

Differed with

Marrin Muhammed

Ana Cristina Ruelas

Differed on

Approach to regulating digital platforms

Brazil’s experience with platform regulation attempts

Explanation

Maira discusses Brazil’s efforts to regulate digital platforms, particularly in the context of elections. She highlights the challenges faced in implementing effective regulations and the ongoing discussions in the Brazilian Congress.

Evidence

Mention of incidents during municipal elections in São Paulo and the paralysis of relevant laws in Congress

Major Discussion Point

Regulation of Digital Platforms

Need for structural solutions to address disinformation

Explanation

Maira emphasizes the need for structural solutions to address disinformation during elections. She argues for the importance of approving relevant laws and regulations to effectively deal with the issue.

Evidence

Reference to ongoing discussions in the Brazilian Congress and efforts by the elections court to launch regulations

Major Discussion Point

Disinformation and Elections

Yasmin Curzi

Speech speed

123 words per minute

Speech length

1948 words

Speech time

950 seconds

Feminist approach to information integrity to address inequalities

Explanation

Yasmin proposes using feminist scholarship to inform information integrity debates and policymaking. She argues for addressing inequalities in media ecosystems and enabling more diverse participation in content production and moderation.

Evidence

Reference to an article on Gender Information Integrity

Major Discussion Point

Information Integrity and Platform Responsibility

Agreed with

Ana Cristina Ruelas

Marrin Muhammed

Agreed on

Need for multi-stakeholder approach to information integrity

Balancing freedom of expression with content moderation

Explanation

Yasmin raises the question of how to balance freedom of expression with the need for content moderation on digital platforms. She asks about the roles of different stakeholders in this process and whether there are good regulatory models to follow.

Major Discussion Point

Regulation of Digital Platforms

Dennis Redeker

Speech speed

150 words per minute

Speech length

1552 words

Speech time

616 seconds

Survey data on user concerns about misinformation across countries

Explanation

Dennis presents findings from a 41-country survey on user attitudes towards social media platforms. The data shows varying levels of concern about misinformation across different countries and regions.

Evidence

Survey results showing highest concern in Poland and lowest in Haiti, with variations across Global North and South

Major Discussion Point

Information Integrity and Platform Responsibility

AI and disinformation risks in elections

Explanation

Dennis mentions the emerging challenges posed by AI in spreading disinformation during elections. He highlights the need for proactive solutions to address these issues before they impact electoral processes.

Evidence

Reference to a recent case in Romania where election results were cancelled due to apparent platform manipulation

Major Discussion Point

Disinformation and Elections

Moderator

Speech speed

142 words per minute

Speech length

415 words

Speech time

174 seconds

Challenges in regulating decentralized platforms

Explanation

The moderator raises the question of how to regulate decentralized platforms or communication tools that lack a central point of control. This highlights the complexity of applying traditional regulatory approaches to new technological structures.

Major Discussion Point

Regulation of Digital Platforms

Panelist

Speech speed

161 words per minute

Speech length

832 words

Speech time

308 seconds

Role of electoral management bodies in platform regulation

Explanation

The panelist highlights the crucial role of electoral management bodies in monitoring and regulating social media content during elections. They argue for better coordination between these bodies and platforms to effectively address harmful content.

Evidence

Example from India where the Election Commission has recognized the influence of social media platforms on electoral discourse

Major Discussion Point

Regulation of Digital Platforms

Agreed with

Ana Cristina Ruelas

Maira Sato

Agreed on

Importance of platform transparency and accountability

Role of electoral bodies in monitoring social media content

Explanation

The panelist highlights the importance of electoral management bodies actively monitoring social media content during elections. They argue for better coordination between these bodies and platforms to effectively address harmful content.

Evidence

Example from India where the Election Commission has recognized the influence of social media platforms on electoral discourse

Major Discussion Point

Disinformation and Elections

Audience

Speech speed

106 words per minute

Speech length

224 words

Speech time

126 seconds

Concerns about influencers and harmful content during Brazilian elections

Explanation

An audience member raises concerns about the rise of influencers and the spread of harmful content during Brazilian municipal elections. They highlight the lack of platform responsibility in moderating such content and its impact on the electoral process.

Major Discussion Point

Disinformation and Elections

Agreements

Agreement Points

Need for multi-stakeholder approach to information integrity

Ana Cristina Ruelas

Marrin Muhammed

Yasmin Curzi

Importance of multi-stakeholder approach and human rights framework

Need for structural reforms to address root causes of misinformation

Feminist approach to information integrity to address inequalities

Speakers agree on the importance of involving multiple stakeholders, including governments, platforms, civil society, and academics, in addressing information integrity issues.

Importance of platform transparency and accountability

Ana Cristina Ruelas

Maira Sato

Panelist

Need for transparency and accountability from platforms, especially during elections

Brazil’s efforts to promote information integrity nationally and internationally

Role of electoral management bodies in platform regulation

Speakers emphasize the need for digital platforms to be transparent about their content moderation practices and risk assessments, particularly during elections.

Similar Viewpoints

Both speakers argue for addressing structural inequalities in the media ecosystem and promoting diverse participation in content production and moderation.

Marrin Muhammed

Yasmin Curzi

Need for structural reforms to address root causes of misinformation

Feminist approach to information integrity to address inequalities

Both speakers highlight the importance of international cooperation and multi-stakeholder approaches in addressing information integrity issues.

Ana Cristina Ruelas

Maira Sato

Importance of multi-stakeholder approach and human rights framework

Brazil’s efforts to promote information integrity nationally and internationally

Unexpected Consensus

Challenges in regulating decentralized platforms

Moderator

Panelist

Challenges in regulating decentralized platforms

Role of electoral management bodies in platform regulation

There was an unexpected acknowledgment of the difficulties in regulating decentralized platforms, with both the moderator and a panelist recognizing the complexity of applying traditional regulatory approaches to new technological structures.

Overall Assessment

Summary

The main areas of agreement include the need for a multi-stakeholder approach to information integrity, the importance of platform transparency and accountability, and the recognition of structural challenges in addressing misinformation.

Consensus level

There is a moderate level of consensus among the speakers on the broad principles of addressing information integrity. However, there are variations in the specific approaches and solutions proposed. This level of consensus suggests that while there is agreement on the importance of the issue, there is still room for debate on the most effective ways to implement solutions.

Differences

Different Viewpoints

Approach to regulating digital platforms

Marrin Muhammed

Ana Cristina Ruelas

Maira Sato

Need for structural reforms to address root causes of misinformation

Importance of multi-stakeholder approach and human rights framework

Brazil’s efforts to promote information integrity nationally and internationally

While Marrin advocates for structural reforms targeting platform business models, Ana Cristina emphasizes a multi-stakeholder approach, and Maira focuses on national and international policy efforts.

Unexpected Differences

Regulation of decentralized platforms

Moderator

Panelist

Challenges in regulating decentralized platforms

Role of electoral management bodies in platform regulation

The moderator raised an unexpected point about the difficulty of regulating decentralized platforms, which contrasts with the focus on regulating centralized platforms discussed by other speakers.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to regulating digital platforms, the balance between freedom of expression and content moderation, and the specific mechanisms for ensuring platform accountability during elections.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of addressing information integrity and platform responsibility, speakers differ on the specific approaches and mechanisms to achieve these goals. These differences reflect the complexity of the issue and the need for further dialogue and research to develop effective solutions.

Partial Agreements

All speakers agree on the need for better regulation of platforms during elections, but differ on the specific mechanisms and responsible bodies to achieve this goal.

Ana Cristina Ruelas

Panelist

Maira Sato

Need for transparency and accountability from platforms, especially during elections

Role of electoral management bodies in platform regulation

Brazil’s experience with platform regulation attempts

Takeaways

Key Takeaways

Information integrity requires a multi-stakeholder approach involving governments, platforms, civil society, and other actors

Structural reforms are needed to address root causes of misinformation, including platform business models

Human rights and feminist frameworks can inform approaches to information integrity

Transparency and accountability from platforms is crucial, especially during elections

Regulation of digital platforms remains challenging, particularly for decentralized platforms

Resolutions and Action Items

Brazil to launch national chapter of Global Initiative for Information Integrity on Climate Change

UNESCO to manage global fund supporting research and communication projects on information integrity

Continued development of guidelines and principles for digital platform governance

Unresolved Issues

How to effectively regulate decentralized platforms

Balancing freedom of expression with content moderation

Scaling up alternative, decentralized communication platforms while addressing moderation challenges

Finding political will to implement structural reforms of platform business models

Suggested Compromises

Collaborative efforts between platforms, governments, and civil society for content moderation

Balancing platform self-regulation with government oversight and judicial review

Exploring decentralized platform models while maintaining some centralized moderation capabilities

Thought Provoking Comments

The liberal regulatory playbook of supplier-focused and consumer-focused remedies to restoring the health and vibrancy of the public sphere does not address the root cause of the problem, which is the business model of the platforms, the techno-design architecture that fosters a hostile digital environment.

speaker

Marrin Muhammed

reason

This comment challenges the conventional approach to regulating platforms and argues for more fundamental structural changes.

impact

It shifted the discussion towards considering more radical solutions like changing platform business models and structures rather than just tweaking existing regulations.

Brazil has been very active to include this concept nationally and internationally in the debate. So, as Brazil was in the presidency of the G20 last year, this year actually, until November, we worked to include this issue, the issue of information integrity in the G20’s agenda and we managed to include an item on information integrity in the G20’s working group on digital economy.

speaker

Maira Sato

reason

This comment highlights concrete policy actions being taken at an international level to address information integrity.

impact

It brought the discussion from theoretical concepts to real-world policy implementation, showing how the debate is translating into action at high levels of government.

The UN Global Principles describe information integrity as, and even in the GDC, when you see GDC, you can see the similar language as fostering a pluralistic information space, one that enables trust, knowledge and individual choice. And no one can dispute that these are important values to achieve. But the question is, what do they mean in practical terms?

speaker

Marrin Muhammed

reason

This comment critically examines the practical implications of high-level principles on information integrity.

impact

It prompted a deeper discussion on how abstract principles can be translated into concrete actions and policies in different contexts.

We need policies to actually decentralize media monopoly. I’m not only talking about the big techs specifically but also media in general, the TV channels, etc. We need actions and policymaking to foster media diversity in this sense

speaker

Yasmin Curzi

reason

This comment broadens the scope of the discussion beyond just digital platforms to include traditional media, emphasizing the need for overall media diversity.

impact

It expanded the conversation to consider the broader media ecosystem and how it interacts with digital platforms in shaping information integrity.

I think regulatory authorities are also changing also their view and their narratives in order to think broadly on systems and processes, which I think is very, very important because during the 2015 to 2020, we still saw a lot of the solutions as to criminalise content. And now we’re going more on the path of trying to identify how the systems and these processes should be, you know, how this platform should be accountable about the systems and processes that they put in place in order to identify potential harmful content

speaker

Ana Cristina Ruelas

reason

This comment highlights an important shift in regulatory approaches from content-focused to process-focused strategies.

impact

It provided historical context to the evolving regulatory landscape and pointed towards future directions in platform governance.

Overall Assessment

These key comments shaped the discussion by moving it from theoretical concepts to practical implementation challenges, emphasizing the need for structural changes in platform business models, highlighting the importance of international cooperation, and broadening the scope to include traditional media. The discussion evolved from identifying problems to exploring concrete solutions and policy actions, while also critically examining the practical implications of high-level principles on information integrity.

Follow-up Questions

How to find the political will to implement structural reforms targeting platform business models and incentive structures?

speaker

Marrin Muhammed

explanation

This was highlighted as a crucial missing piece in addressing the root causes of information integrity issues.

How can we operationalize the concept of information integrity as a public policy?

speaker

Maira Sato

explanation

Brazil is actively working on implementing this concept in specific sectors, indicating a need for practical approaches.

What are effective ways to assess the performance of social media platforms during elections globally?

speaker

Audience member (Julia)

explanation

This is crucial for understanding and improving platform accountability during critical democratic processes.

What are the most unacceptable outcomes in electoral processes related to social media, and how can they be prevented in the future?

speaker

Audience member (Julia)

explanation

Identifying these outcomes is essential for developing targeted strategies to protect election integrity.

What specific roles can civil society play in ensuring unacceptable outcomes are identified, addressed, and taken seriously by platforms?

speaker

Audience member (Julia)

explanation

Understanding civil society’s role is important for creating a comprehensive approach to platform accountability.

Is there a good model for balancing freedom of expression and platform regulation that we should look to as a horizon?

speaker

Yasmin Curzi

explanation

Identifying effective regulatory models is crucial for developing best practices in platform governance.

How can decentralized or peer-to-peer communication platforms be regulated?

speaker

Tapani Tarvainen

explanation

This question addresses the challenges of regulating emerging technologies that lack centralized control points.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.