Protect people and elections, not Big Tech! | IGF 2023 Town Hall #117

10 Oct 2023 07:30h - 08:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Daniel Arnaudo

In 2024, several countries, including Bangladesh, Indonesia, India, Pakistan, and Taiwan, are set to hold elections, making it a significant year for democracy. However, smaller countries often do not receive the same level of attention and support when it comes to content moderation, policies, research tools, and data access. This raises concerns about unfair treatment and limited resources for these nations.

Daniel highlights the need for improved data access for third-party researchers and civil society, particularly in smaller countries. Currently, there is a disinvestment in civic integrity, trust, and safety, which further exacerbates the challenges faced by these nations. Platforms are increasingly reducing third-party access to APIs and other forms of data, making it harder for researchers and civil society to gather valuable insights. Access is often provisioned through mechanisms controlled within large countries, such as the United States or in Europe, resulting in high barriers for smaller nations to access data.

Another pressing issue raised is the insufficient addressing of threats faced by women involved in politics on social media platforms. Research shows that women in politics experience higher levels of online violence and threats. Daniel suggests that platforms establish mechanisms to support women and better comprehend and tackle these threats. Gender equality should be prioritised to ensure that women can participate in politics without fear of harassment or intimidation.

To effectively navigate critical democratic moments, such as elections or protests, social media platforms should collaborate with organisations that possess expertise in these areas. Daniel mentions the retreat from programs like the Trusted Partners at Meta and highlights the potential impacts on elections, democratic institutions, and the bottom lines of these companies. By working alongside knowledgeable organisations, platforms can better understand and respond to the needs and challenges of democratic events.

Algorithmic transparency is a desired outcome, but it proves to be a complex issue. While it has the potential to improve accountability and fairness, there are risks of manipulation or gaming the system. Striking the right balance between transparency and safeguarding against misuse is a delicate task that requires careful consideration.

Smaller political candidates seeking access to reliable and accurate political information need better protections. In order to level the playing field, it is crucial to provide resources and support to candidates who may not have the same resources as their larger counterparts.

The data access revolution is transforming how companies provide access to their systems. This shift enables greater innovation and collaboration, as it has in other sectors such as infrastructure. Companies should embrace this transformation and strive to make their systems more accessible, promoting inclusivity and reducing inequalities.

Deploying company employees in authoritarian contexts poses challenges. Under certain regulations, these employees might become bargaining chips, compromising the companies’ integrity and principles. It is essential to consider the potential risks and implications before making such decisions.

Furthermore, companies should invest in staffing and enhancing their understanding of local languages and contexts. This investment ensures a better response to users’ needs and fosters better cultural understanding, leading to more effective and inclusive collaborations.

In conclusion, 2024 holds significant democratic milestones, but there are concerns about the attention given to smaller countries. Improving data access for researchers and civil society, addressing threats faced by women in politics, working with organisations during critical democratic moments, and promoting algorithmic transparency are crucial steps forward. Protecting smaller political candidates, embracing the data access revolution, considering the risks of deploying employees in authoritarian contexts, and investing in local understanding are additional factors that warrant attention for a more inclusive and balanced democratic landscape.

Audience

The analysis raises a number of concerns regarding digital election systems, global media platforms, data access for research, and the integrity of Russia’s electronic voting systems. It argues that digital election systems are susceptible to cyber threats, citing a disruption in Russian elections caused by a denial of service attack from Ukraine. This highlights the need for improved cybersecurity measures to safeguard the accuracy and integrity of digital voting systems.

Concerns are also raised about the neutrality and transparency of global media platforms. It is alleged that these platforms may show bias by taking sides in conflicts, potentially undermining their neutrality. Secret recommendation algorithms used by these platforms can influence users’ news feeds, and this lack of transparency raises questions about the information users are exposed to and the influence these algorithms can have on public perception. The analysis also notes that in certain African countries, platforms like Facebook serve as the primary source of internet access for many individuals, highlighting the importance of ensuring fair and unbiased information dissemination.

Transparency in global media platforms’ recommendation algorithms is deemed necessary. The analysis argues that platforms like Facebook have the power to ignite revolutions and shape public discourse through these algorithms. However, the lack of understanding about how these algorithms work raises concerns about their impact on democratic processes and the formation of public opinion.

The analysis also highlights the challenges of accessing data for academic and civil society research, without specifying the nature or extent of these challenges. It takes the position that measures need to be taken to fight against data access restrictions in order to promote open access and support research efforts in these fields.

The integrity of Russia’s electronic voting systems is called into question, despite the Russian Central Election Commission not acknowledging any issues. These systems, developed by big tech companies Kaspersky and Rostelecom, lacked transparency and did not comply with the recommendations of the Russian Commission, raising doubts about their reliability and potential for manipulation.

The use of social media platforms, particularly Facebook, for political campaigning in restrictive political climates is also deemed ineffective. The analysis argues that these platforms may not effectively facilitate individual political campaigns. Supporting facts are provided, such as limited reach and targeting capabilities of Facebook’s advertising algorithms and the inability to use traditional media advertisements in restrictive regimes. An audience member with experience managing a political candidate page on Facebook shares their negative experience, further supporting the argument that social media platforms may not be as effective as traditional methods in certain political contexts.

In conclusion, the analysis presents a range of concerns regarding the vulnerabilities of digital election systems, the neutrality and transparency of global media platforms, challenges in data access for research, and the integrity of Russia’s electronic voting systems. It emphasizes the need for enhanced cybersecurity measures, transparency in recommendation algorithms, increased support for data access in research, and scrutiny of electronic voting systems. These issues have significant implications for democracy, public opinion, academic progress, and political campaigning in an increasingly digital and interconnected world.

Ashnah Kalemera

Social media platforms and the internet have the potential to play a significant role in electoral processes. They are critical in ensuring that voter registration is complete and accurate, enabling remote voting for excluded communities and remotely based voters, supporting campaigns and canvassing as well as voter awareness and education, facilitating results transmission and tallying, and monitoring malpractice.

However, technology also poses threats to electoral processes, especially in Africa. Authoritarian governments leverage the power of technology for their self-serving interests. They actively use disinformation and hate speech to manipulate narratives and public opinion during elections. Various actors, including users, governments, platforms themselves, private companies, and PR firms, contribute to this manipulation by spreading disinformation and hate speech.

The thriving of disinformation and hate speech in Africa can be attributed to the increasing penetration of technology on the continent. This provides a platform for spreading false information and inciting hatred. Additionally, the growing youth population, combined with characteristic ethnic, religious, and geopolitical conflicts, creates an environment where disinformation and hate speech can flourish.

To combat the spread of disinformation, it is crucial for big tech companies to collaborate with media and civil society. However, limited collaboration exists between these actors in Africa, and concerns arise regarding the slow processing and response times to reports and complaints, as well as the lack of transparency in moderation measures.

Research, consultation, skill-building, and strategic litigation are identified as potential solutions to address the challenges posed by big tech’s involvement in elections and the spread of disinformation. Evidence-driven advocacy is important, and leveraging norm-setting mechanisms can help raise the visibility of these challenges. Challenging the private sector to uphold responsibilities and ethics, as outlined by the UN guiding principles on business and human rights, is also essential.

Addressing the complex issues surrounding big tech, elections, and disinformation requires a multifaceted approach. While holding big tech accountable is crucial, it is important to recognize that the manifestations of the problem vary from one context to another. Therefore, stakeholder conversations must acknowledge and address the different challenges posed by disinformation.

Data accessibility plays a critical role in addressing these issues. Organizations like CIPESA have leveraged data APIs for sentiment analysis and monitoring elections. However, the lack of access to data limits the ability to highlight challenges related to big tech involvement in elections.
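To make that concrete, below is a minimal, illustrative sketch of the kind of API-based sentiment monitoring described above. The endpoint, response fields, and token are hypothetical stand-ins for whatever data API a platform exposes, and the sentiment scoring uses the VADER analyzer from the NLTK library as one common off-the-shelf choice; this is not a description of CIPESA's actual tooling.

```python
# Illustrative sketch only. The API endpoint and response fields below are
# hypothetical stand-ins for a platform data API of the kind researchers
# used before access was restricted; the token is a placeholder.
import requests
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # pip install nltk
# One-time setup: import nltk; nltk.download("vader_lexicon")

API_URL = "https://api.example-platform.com/v2/posts/search"  # hypothetical

def fetch_posts(token: str, query: str, limit: int = 100) -> list[str]:
    """Fetch recent public posts matching a query from the (hypothetical) API."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        params={"query": query, "max_results": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["text"] for item in resp.json().get("data", [])]

def sentiment_summary(posts: list[str]) -> dict[str, int]:
    """Bucket posts as positive/neutral/negative by VADER compound score."""
    analyzer = SentimentIntensityAnalyzer()
    counts = {"positive": 0, "neutral": 0, "negative": 0}
    for text in posts:
        compound = analyzer.polarity_scores(text)["compound"]
        if compound >= 0.05:
            counts["positive"] += 1
        elif compound <= -0.05:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    return counts

if __name__ == "__main__":
    posts = fetch_posts(token="YOUR_TOKEN", query="#Election2024")
    print(sentiment_summary(posts))
```

When platforms narrow or paywall such APIs, it is the fetch step in a sketch like this that stops working, which is why researchers describe the change as losing visibility into elections.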

Furthermore, it is important to engage with lesser-known actors, such as electoral bodies and regional economic blocs, to effectively address these issues. Broader conversations that include these stakeholders can lead to a better understanding of the challenges and potential solutions.

In conclusion, social media platforms and the internet offer significant potential to support electoral processes but also pose threats through the spread of disinformation and hate speech. Collaboration between big tech, media, and civil society, as well as research, skill-building, and strategic litigation, are necessary elements in addressing these challenges. Holding big tech accountable and engaging with lesser-known actors are also crucial for effective solutions.

Moderator – Bruna Martins Dos Santos

Digital Action convenes the Global Coalition for Tech Justice, which aims to ensure the accountability of big tech companies and safeguard the integrity of elections. The coalition has been gaining support from a growing number of organisations and academics, indicating momentum for its cause.

Founded in 2019, Digital Action focuses on addressing the impact of social media on democracies and works towards holding tech giants accountable for their actions. Their primary objective is to prevent any negative consequences on elections and foster collaboration by involving social media companies in the conversation.

Moreover, Digital Action seeks to empower individuals who have been adversely affected by tech harms. They prioritize amplifying the voices of those impacted and ensuring that their concerns are heard. Through catalyzing collective action, bridge-building, and facilitating meaningful dialogue, they aim to make a positive difference.

The session also highlighted criticism of social media companies for their lack of investment in improving people's day-to-day lives, suggesting that these companies may not be prioritising initiatives that directly affect people's well-being and societal conditions.

In conclusion, Digital Action’s global coalition for tech justice is committed to holding big tech accountable, protecting election integrity, and empowering those affected by tech harms. By involving social media companies and gaining support from diverse stakeholders, they aspire to create a more just and inclusive digital landscape. Additionally, the need for social media companies to invest in initiatives that enhance people’s daily lives is emphasized.

Yasmin Curzi

The legislative scenario in Brazil concerning platform responsibilities is governed by two main pieces of legislation. The Brazilian Civil Rights Framework (Marco Civil da Internet), established in 2014, sets out fundamental principles for internet governance. Under its Article 19, platforms are only held responsible for illegal user-generated content if they fail to comply with a judicial order. The Consumer Defense Code also recognises users as vulnerable in their interactions with businesses.

However, the impact of measures to combat false information remains uncertain. Although platforms have committed to creating reporting channels and labelling election-related content, there is a lack of detailed metrics to fully understand the effectiveness of these measures. There are concerns about whether content is being removed quickly enough to prevent it from reaching a wide audience. One concerning example is the case of Jovem Pan, which disseminated a fake audio on election day that had already been viewed 1.7 million times before removal.

The analysis indicates that social media and platforms’ content moderation have limited influence on democratic elections. Insufficient data and information exist about platforms’ actions and their effectiveness in combating false information. Content shared through official sources often reaches a wide audience before it is taken down. Despite partnerships with fact-checking agencies, it remains uncertain how effective platform efforts are in combating falsehood.

There is a pressing need for specific legislation and regulation of platforms to establish real accountability. Platforms currently fail to provide fundamental information, such as how much they invest in content moderation. However, there is hope, as the IGF Dynamic Coalition on Platform Responsibility (DCPR) has developed a framework for meaningful and interoperable transparency. This framework could guide lawmakers and regulators in addressing the issue.

Furthermore, platforms should improve their content moderation practices. Journalists in Brazil have requested information from Facebook and YouTube regarding their investment in content moderation but have received no response. Without the ability to assess the harmful content recommended by platforms, it becomes difficult to formulate appropriate public policies.

In conclusion, the legislative framework in Brazil regarding platform responsibilities comprises two main legislations. However, the impact of measures to combat false information remains uncertain, and the influence of social media and platform content moderation on democratic elections is limited. Specific legislation and regulation are needed to establish accountability, and platforms need to enhance their content moderation practices. Providing meaningful transparency information will facilitate accurate assessment and policymaking.

Alexandra Pardal

The vulnerability of online spaces and the ease with which domestic or foreign actors can manipulate and spread falsehoods is a growing concern, especially in terms of the manipulation of democratic processes. The use of new technologies like generative AI further complicates the issue, making it easier for malicious actors to deceive and mislead the public. This highlights the urgent need for stronger protections against online harms.

One significant observation is the glaring inequality between world regions in terms of protections from online harms. This disparity emphasises the need for a more balanced and comprehensive approach to safeguarding online spaces, ensuring that individuals worldwide have equitable protection against manipulation and disinformation.

Social media companies play a pivotal role in creating safe online environments for all users. This is particularly important with the upcoming 2024 elections, as these companies must fulfil their responsibilities to protect the integrity of democratic processes. However, concerns arise when examining the allocation of resources by these companies. Although Facebook says it has invested $13 billion in platform safety since 2016, internal documents show that in 2020 it devoted 87% of its global budget for classifying false or misleading information to the US market, where only about 10% of its users reside. This skewed allocation raises questions about the equal treatment of users globally and the effectiveness of combating disinformation on a worldwide scale.

Furthermore, non-English languages pose a significant challenge for automated content moderation on various platforms, including Facebook, YouTube, and TikTok. Difficulties in moderating content in languages other than English can lead to a substantial gap in combating false information and harmful content in diverse linguistic contexts. Efforts must be made to bridge this gap and ensure that content moderation is effective in all languages, promoting a safer online environment for users regardless of their language.

In conclusion, the vulnerability of online spaces and the potential manipulation of democratic processes through the spread of falsehoods raise concerns that require urgent attention. Social media companies have a responsibility to create safe platforms for users worldwide, with specific emphasis on the upcoming elections. Addressing the inequities in protections against online harms, including the allocation of resources and challenges posed by non-English languages, is crucial for maintaining the integrity of online information and promoting a more secure digital environment.

Lia Hernandez

The speakers engaged in a comprehensive discussion regarding the role of digital platforms in promoting democracy and facilitating access to information. They emphasised the importance of IPANDETEC's work to advance digital rights across all Central American countries. Additionally, they highlighted the collaboration between big tech companies and electoral public entities, as the former provide tools to help preserve fundamental rights during election processes.

The argument put forth was that digital platforms should serve as valuable tools for promoting democracy and facilitating access to information. This aligns with the related United Nations Sustainable Development Goals, including Goal 10: Reduced Inequalities and Goal 16: Peace, Justice, and Strong Institutions.

However, concerns were raised about limitations on freedom of the press, information, and expression. Journalists in Panama have faced obstacles and restrictions when attempting to communicate information of public interest. Of particular concern is that the former president, Ricardo Martinelli, known for violating privacy, is a candidate in the next elections, and reporting on his corruption cases is being restricted.

Furthermore, the speakers emphasized the necessity of empowering citizens, civil society organizations, human rights defenders, and activists. They argued that it is not only important to strengthen the electoral authority but also crucial to empower the aforementioned groups to ensure a robust and accountable democratic system. The positive sentiment surrounding this argument reflects the speakers’ belief in the need for a participatory and inclusive democracy.

However, contrasting viewpoints were also presented. Some argued that digital platforms do not make tools widely available to civil society but instead focus on providing them to the government. This negative sentiment highlights concerns about the control and accessibility of these tools, potentially limiting their efficacy in promoting democracy and access to information.

Additionally, the quality and standardisation of data used for monitoring digital violence were subject to criticism. The negative sentiment regarding this issue suggests that the data being utilised is unclean and lacks adherence to open data standards. Ensuring clean and standardised data is paramount to effectively monitor and address digital violence.

In conclusion, the expanded summary highlights the various perspectives and arguments surrounding the role of digital platforms in promoting democracy and access to information. It underscores the importance of independent tech work, collaboration between big tech companies and electoral entities, and empowering citizens and civil society organisations. However, limitations on freedom of the press, potential corruption, restricted access to tools, and data quality issues represent significant challenges that need to be addressed for the effective promotion of democracy and access to information.

Session transcript

Moderator – Bruna Martins Dos Santos:
So, I'm going to start off with a little bit of background on what is happening at the moment, and then I'm going to turn it over to my colleague, Mariana, to talk a little bit about what's going on. Good afternoon, everybody. We're just sorting out one last issue with Zoom, but I'm going to start off with this session. Welcome to the town hall that's called Protect People and Elections, Not Big Tech. We're here to talk about the Global Coalition for Tech Justice, a group of people working on big tech accountability and how to safeguard elections, trying to bring in a new conversation, or improve the current ones, about why we should care about elections and why we should bring this conversation even closer to social media companies, right? Some of the coalition are with me at this panel, but we have more and more organizations and academics joining this space to discuss some of the things that we are planning for today. And for those of you that don't know Digital Action, we were founded in 2019 and have been working for a number of years on how social media affects democracies, and how the other way around works as well. Our work has involved catalysing collective action, building bridges, and ensuring that those directly impacted by tech harms are the ones that we are listening to. And during these four days that I've been here, we have kept hearing that social media companies invest less, or much less, in people's day-to-day lives. So that's a little bit of what we want to do. I want to first bring in Alexandra Pardal. She's the global campaigns director at Digital Action, and she's going to open this panel for us and explain a little bit more about the Year of Democracy campaign and what we're all about. Alex, I think you're in the room, right?

Alexandra Pardal:
Yes, I am. Thank you, Bruna, and wonderful to be with you here. So welcome to all our panelists and participants gathered today in Kyoto, and those joining us remotely from elsewhere. This is a global conversation on how to protect people and elections, not big tech. I'm Alexandra Pardal from Digital Action, a globally connected movement-building organization with a mission to protect democracy and rights from digital threats. In 2024, the year of democracy, more than 2 billion people will be entitled to vote as US presidential and European parliamentary elections converge with national polls in India, Indonesia, South Africa, Rwanda, Egypt, Mexico, and some 50 other countries: the largest mega-cycle of elections we've seen in our lifetimes. But our information spaces, and the ability to maintain the integrity of information and uphold the truth and a shared understanding of reality, are more vulnerable than ever. From foreign and malign influence in elections, to the use of new tech like generative AI making it easier for domestic or foreign actors to manipulate and lie, to financially motivated, globally active disinfo industries, the threats have never been bigger nor more pervasive. Elections are flashpoints for online harms and their offline consequences. Now, over the past four years, Digital Action has collaborated with hundreds of organisations on every continent, supporting the monitoring of digital threats to elections in the EU and elsewhere, and led large civil society coalitions demanding a strong Digital Services Act in the EU and better policy against hate and extremism from social media companies globally. This experience has taught us that there's startling inequity between world regions when it comes to protections from harms. From disinformation, hate and incitement to manipulation of democratic processes, online platforms just aren't safe for most people. We know that the platforms run by the world's social media giants, Meta, Google, X and TikTok, have the greatest global reach they've ever had and are at their most powerful, but safeguarding efforts to protect information integrity globally have been weak. For instance, Facebook says it's invested $13 billion in its platform safety and security since 2016, but internal documents show that in 2020, the company ploughed 87% of its global budget for time spent on classifying false or misleading information into the US, even though 90% of its users live elsewhere. This means there's a dearth of moderators with cultural and linguistic expertise, and Facebook has been unable to effectively tackle disinformation at all times, most consequentially during elections, when disinformation and other online harms peak. Similarly, non-English languages have been a stumbling block for automated content moderation on YouTube, Facebook, and TikTok. Algorithms struggle to detect harmful posts in a number of languages in countries at risk of real-world violence and in democratic decline or autocracy. What this means is that the risks on the horizon in 2024 are very serious indeed, at a time when social media companies are cutting costs, laying off staff, and pulling back from their responsibilities to stem the flow of disinformation and protect the information space from bad actors.
If some of the world's largest and most stable democracies, the United States and Brazil, have been rocked by bad actors mobilizing on social media platforms, spreading election disinfo, and organizing violent assaults on the heart of their democracies, imagine next year, where we'll see democracies under threat, like India, Indonesia, Tunisia, alongside a whole swathe of countries that are unfree or at risk, where citizens hope to hold onto spaces to resist the manipulation of the truth for autocratic purposes. How can online platforms be made safe to uphold information and electoral integrity and protect people's rights? So the challenge of 2024's election megacycle is a call to all of us to show up, ideate, and innovate, bring our skills, talents, and any power we have to the table, and collaborate. As an example of what's in the works, and as background to the perspectives we're going to hear today: together with over 160 organizations now, experts and practitioners from across the world, we've convened the Global Coalition for Tech Justice to launch the 2024 Year of Democracy campaign, in order to foster collective action, collaborations and coordination across election countries next year. Together with our members, the Global Coalition for Tech Justice will campaign, research, investigate and tell the stories of tech harm in global media, supporting and amplifying the efforts of those on the front lines and building policy solutions to address the global impacts of social media companies. So we're going to be actively collaborating with stakeholders, and this conversation today is an opportunity to further these conversations and get collaborations off the ground with all those who share the goal of safe online platforms for all. So I'm delighted to introduce this session for this important global conversation on how we protect 2024's mega cycle of elections from tech harms and ensure social media companies fulfill their responsibilities to make their products and platforms safe for all. I'm really happy to hand back to Bruna to introduce our panelists and the discussion this morning. Thank you.

Moderator – Bruna Martins Dos Santos:
Thank you so much, Alex, and welcome to the session as well. And as she just brought up, this is really a global conversation that we want to have. We want to spark a discussion on how we can collectively ensure that big tech plays its part in protecting democracy and human rights in the 2024 elections. It's not just one, it's some 60 elections, as everybody has been saying this week. So it's a rather key year for everyone. We have two provocative kickoff questions for the panelists, and I'm going to bring you, Ashnah, into the conversation first. Ashnah is programs coordinator for CIPESA. And the first question for you would be whether you consider that social media platforms and content moderation, or the lack of it, are shaping democratic elections, and if so, how?

Ashnah Kalemera:
Thank you, Bruna. Good evening, everyone, or good morning, like Alex said. I guess we're all in very different time zones at the moment. It's a pleasure to be here. Thank you for the invitation, Digital Action, and the opportunity to have this very important discussion. Once again, my name is Ashnah Kalemera, and I work with CIPESA. CIPESA is the Collaboration on International ICT Policy for East and Southern Africa. We are based out of Kampala, Uganda, but work across Africa promoting effective and inclusive technology policy, as well as its implementation as it intersects with governance, good governance obviously, human rights, upholding human rights, as well as improved livelihoods. So I like to start off these conversations on very light notes. Very often, these panels are dense in terms of spelling doom and gloom. So first, I'd like to emphasize that technology broadly, including social media platforms and the internet, has huge potential for electoral processes and systems. These tools are critical in ensuring that voter registration is complete and accurate, and in enabling remote voting for excluded communities or remotely based voters. They have been critical in supporting campaigns and canvassing, as well as voter awareness and education, results transmission and tallying, and monitoring malpractice, all of them critical to electoral processes and lending themselves to promoting the legitimacy and inclusion of elections in states that have democratic deficits, which for Africa is many of the states. So I think that light note is very important to highlight as we then go on to the doom and gloom that this conversation will likely take. And now we start the doom and gloom. Unfortunately, despite those opportunities, there are immense threats that technology poses for electoral processes in Africa, and I guess for much of the world. Increasingly, we're seeing states, authoritarian governments especially, leveraging the power of technology for self-serving interests. A critical example there is network disruptions or shutdowns. I see KeepItOn coalition members in the room, and they work to push back on those excesses. On disinformation and hate speech: users, governments, the platforms themselves, as well as private companies and PR firms, are actively influencing narratives during elections, undermining all the good stuff that I mentioned in the beginning. And very often we ask ourselves at CIPESA, and I imagine everybody in the room does, why disinformation thrives, right? Because pretty much everybody is aware of the challenge that it poses, but in Africa especially, it's thriving, and thriving to very worrying levels. One reason is again something positive: it's because technology is penetrating, and penetrating very well, on the continent. Previously unconnected communities now have access to information at the click of a button, literally, which again in the context of elections is great, but in the case of disinformation is a significant challenge. Second is the youth population on the continent, with many of them coming online via social media. There are always jokes in sessions that I've attended where there's African representation that for many Africans, the internet is social media. And that challenge is enabling disinfo and hate speech to thrive. Third is conflicts. The elections that we're talking about are happening in very challenging contexts that are characterized by ethnic, religious, and geopolitical conflicts.
Again, all the nice stuff I mentioned earlier on is then cast with a really dark shadow. Like Alex mentioned, the context that I've just described is going to be a very significant stress test come 2024 and beyond for the continent. And we're likely to see responses that undermine the potential of the technology to uphold electoral legitimacy, but also for citizens to realize their human rights. One of those reactions we're likely to see from a state perspective is the weaponization of laws to undermine voice or critical opinion online, which again undermines electoral processes and integrity. And unfortunately, given the context around conflicts, we're likely to see a lot of fueling of politically motivated violence, which restricts access to credible information and ultimately perpetuates divides and hate speech, and can lead to offline harms. Now, bringing the conversation back to big tech: on the continent, unfortunately, we're seeing very limited collaboration between tech actors and media and civil society in, for instance, identifying, debunking or pre-bunking, depending on which side of the fence you sit, and moderating disinformation. Also, the processing and response times for reports and complaints are really slow, and this is discouraging reporting and ultimately maximizing, in some cases, the circulation of disinformation and hate speech. There are also significant challenges around opaqueness in moderation measures. We've seen the case in Uganda during the previous elections, where a huge number of accounts were taken down for otherwise not very clear reasons, and that led to a response from the state, i.e. shutting down access to Facebook, which remains inaccessible to date in Uganda. So, given those pros and cons, either side of the coin that I've just described for the African continent, it's important to have collaborative actions and movements just like what Digital Action is spearheading, and we're really honored to be a part of it. And efforts in that regard should focus on showing up and participating in consultation processes just like this one or others, where there are opportunities to challenge or provide feedback and comments. I think that's really important. Such spaces are not many. We at CIPESA host the annual Forum on Internet Freedom in Africa. We marked 10 years a couple of days ago, and for the second time, we were able to have the Meta Oversight Board present and able to engage. They admitted that cases from the African continent are limited, but spaces like the Forum on Internet Freedom in Africa that CIPESA hosts are providing that opportunity for users and other stakeholders to deliberate on these issues. I cannot not say that research and documentation remain important. Of course, we're a research think tank and we're always churning out pages and pages that are not necessarily always read, but I think it's important, because evidence-driven advocacy is critical to this cause. Skills building, again: digital literacy, fact-checking, and information verification remain critical. But also leveraging norm-setting mechanisms and raising the visibility of big tech challenges in UN processes, the Universal Periodic Review, the African Commission on Human and Peoples' Rights. These conversations are not filtering up as much as they should, so there should be interventions that are focused on that, and interventions that, of course, promote and challenge the private sector
to uphold responsibilities and ethics through application of the UN Guiding Principles on Business and Human Rights. Lastly, there is strategic litigation. I think that's also an opportunity that's before us in terms of challenging the excesses that big tech poses for elections in the challenging contexts that I've just described. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks, Ashnah. Thank you very much. Just picking up on two of the topics you spoke about, the weaponization of policymaking processes and politically motivated violence: I think that bridges very well with the recent scenario in Brazil, right? With, unfortunately, yet another attack on a capital, after a lot of discussion about a fake news draft bill and regulation for social media companies. Yasmin, I'm going to bring you in now. Yasmin is from FGV Rio de Janeiro and also the co-coordinator of the DC on Platform Responsibility. Welcome.

Yasmin Curzi:
Thank you so much, Bruna. Could you please display the slides? Thank you so much. So, addressing the first question that Bruna posed to us here: are social media and platforms' content moderation shaping democratic elections? I'm sorry. To answer this question, I'd just like to give a brief context about the elections in Brazil, sorry, about the Brazilian legislative scenario regarding platform responsibilities. There are two main pieces of legislation that deal with content moderation issues. Specifically, since 2014, we have the Brazilian Civil Rights Framework, aka Marco Civil da Internet, probably known by many of you here. It establishes our basic principles for internet governance, such as free speech, net neutrality, and the protection of privacy and personal data, but it also established liability regimes for platforms regarding UGC in its articles 19 to 21. To sum up really quickly, article 19 created a general regime in which platforms are only liable for illegal UGC if they do not comply with a judicial order asking for the removal of specific content, provided it is within the platform's capabilities to do so. There are only two exceptions to this rule, one for copyright and one for non-authorized dissemination of intimate imagery, for which a mere notification by the user or their legal representative is sufficient. The second one is the Consumer Defense Code, aka CDC, which considers users hyposufficient and vulnerable in their relations with enterprises. In its article 14, the CDC establishes an objective liability regime, a strict liability regime, in which enterprises or service providers are responsible, regardless of the existence of fault, for repairing damages caused to consumers due to defects or insufficient or inadequate information about their risks. So, in this sense, these two pieces of legislation can give users many protections online regarding harmful activities and illegal content. Nevertheless, users are still unprotected from the many online harms that are not clearly illegal, such as disinformation, or that are not even perceived by them as harms, like algorithmic gatekeeping, shadow banning, and micro-targeting of problematic content. Regarding the first issue, given the non-existence of legislation that deals specifically with coordinated disinformation, our Electoral Superior Court has been enacting resolutions to set standards for political campaigns and more. Also, the Electoral Superior Court established, in the scope of its Fighting Disinformation Program, partnerships with the main platforms in Brazil, such as Meta, Twitter, TikTok, Kwai, WhatsApp, and Google, which signed official agreements stating what their initiatives would be. In these documents, most of them committed to creating reporting channels, labeling content as electoral-related, redirecting users to the Electoral Court's official website, and promoting official sources. Instagram and Facebook also developed cute stickers to encourage users to vote, in spite of voting already being mandatory in Brazil. Nevertheless, we don't have enough data to see the real impacts of these measures, just generic data on how much content was removed on a given platform, and also generic data on how they are complying with the legislation. This sort of data has been offered by the main platforms in Brazil since the establishment of partnership programs with fact-checking agencies in 2018. I'm not saying that they are not removing enough content.
What I want to highlight here is that we don't have data or metrics to understand what these generic numbers mean, nor do we know whether the content is being removed fast enough to not reach too many users. Furthermore, some of these efforts to combat falsehood on YouTube, for example, were themselves a risk for democracy and elections in 2022. Through the official sources program (this is the slide that is displayed right now), a hyper-partisan news media channel, Jovem Pan, was being actively recommended to YouTube users. To give an example, on election day, Jovem Pan was disseminating a fake audio, allegedly from a famous Brazilian drug dealer, Marcos Camacho, aka Marcola, in which he was supporting Lula's election. Justice Alexandre de Moraes from the Brazilian Federal Supreme Court, who was presiding over the Superior Electoral Court, ordered the removal of the content, but not before it had already reached 1.7 million views. Supporters also shared this video in at least 38 WhatsApp and Telegram groups monitored by the fact-checking agency Aos Fatos. So, to Bruna's question, are social media and platforms' content moderation shaping democratic elections, I tend to answer no, or at least not significantly, as either we do not have significant data, or we don't have enough information on their actions and results. That's it. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks a lot, Yasmin. I'm going to bring in Lia right now as well. Lia is representing IPANDETEC, and is also a fellow Latin American, from yet another region of the world that's facing a lot of these discussions in terms of proper resources, deployment, and policymaking. So Lia, welcome to the panel.

Lia Hernandez:
Thank you so much, Bruna. Good afternoon. Well, my name is Lia Hernandez. I'm going to talk mainly about the recent and upcoming electoral processes in Central America, because politics is a big part of our conversation. I speak very loud, so no. OK, perfect. Well, IPANDETEC is a digital rights organization based in Panama City, but working in all of Central America. So I'm going to refer mainly to the recent electoral process in Guatemala and the next electoral process in Panama, which will take place in May 2024. And first, I want to send all my support to the Guatemalan people, who are mobilizing in the streets demanding democracy after the country's recent elections. In Central America, digital platforms make tools available to our electoral public entities to try to help them verify information and avoid violations of our digital rights, our fundamental rights, such as protest, freedom of expression, freedom of the press, and privacy. But currently, in countries such as Panama, my country, a digital media platform and a journalist were ordered by the Tribunal Electoral, the Panamanian electoral public entity, to remove information from their platform, and they got a fine, because they were posting information about Ricardo Martinelli Berrocal. I don't know if you know about Ricardo Martinelli; he's very famous, as famous as Lula and Bolsonaro in Brazil. Well, he was a former president of Panama, and he's a candidate in the next elections in Panama because he wants to be president again. And, by the way, he's the biggest violator of privacy in the country. So the electoral entity in Panama ordered this journalist to remove information about him, on the grounds that it's against democracy and against his privacy and own image. So the question is: if big techs are giving tools to our electoral public entities to promote democracy, access to information, and fundamental rights, why do electoral entities put up barriers to citizens, journalists, and communicators, whose main role is the legitimate duty to inform, to communicate to citizens what is happening in their countries, and even more so in cases of corruption, because this former president is very corrupt? So freedom of expression, freedom of information, and freedom of the press are limited in Panama when journalists try to report based on the principle of public interest, the interest we have in knowing the good, the bad, or the ugly about the candidates in our electoral processes. Digital platforms must match their words with their actions, because even though they don't have any authority over the decisions of the electoral branch in the country, they should not become part of the problem and limit constitutional guarantees such as freedom of the press. So this is a very recent case that we are following in Panama. Thank you so much, Bruna, for facilitating this panel.

Moderator – Bruna Martins Dos Santos:
Thanks so much, Lia. Very interesting: there is an ongoing line of major interferences with expression and with conversations online, and it's not just one or two countries. Sometimes it's the lack of responsiveness, sometimes it's the ongoing conversation or the cooperation that social media platforms should have with authorities, and it would be interesting to develop that. But there are also downsides to those partnerships when they go down the path of further requests for data and access, or even privacy violations, right? So it is definitely a hard and deep conversation. I'm going to go now to Dan, Daniel Arnaudo from NDI. Dan, welcome to the panel as well, and same question as the others.

Daniel Arnaudo:
Yes, thank you. Thanks for having me, and thanks to everyone for being here. We're really pleased to be a part of this coalition. For those who don't know, I'm from the National Democratic Institute. We're a non-profit, non-partisan, non-governmental organization that works in partnership with groups around the world to strengthen and safeguard democratic institutions, processes, and values to secure a better quality of life for all. We work globally to support elections and strengthen electoral processes, and my work particularly is to support a more democratic information space. In this work, we engage with platforms around the world, both through coalitions like this one and others, such as the Global Network Initiative and the Design 4 Democracy Coalition. We help highlight issues for platforms. We perform social media monitoring. We engage in consultations on various issues, ranging from online violence against women in politics to data access and crisis coordination. As was mentioned, 2024 will be a massive year for democracy. And from our perspective, we're particularly concerned about contexts we work in throughout the global majority, and particularly small and medium-sized countries that do not receive the same attention in terms of content moderation, policies, research tools, data access, and many other issues. This is all in the context of what I think is a serious disinvestment in civic integrity, trust and safety, and related teams within these organizations. Just in this region, you have Bangladesh, Indonesia, India, Pakistan, and Taiwan, which will all hold elections in the coming year. I know there will be some resources devoted to the larger countries, as they are massive user bases, but the smaller ones are going to receive very little attention at all. So I think this is a consistent focus for our work and for considerations around these issues. One of the main recommendations I would make focuses on data access. In the context of this disinvestment, we're seeing a serious pullback from access for third-party researchers. We are very concerned about changes in the APIs and in different forms of access to data on the platforms, as some of my fellow panelists have discussed, for research and other purposes, particularly at Meta and Twitter, or X, and continued restrictions in other places. They are building mechanisms for access for traditional academics in certain cases, but not for researchers or broader civil society who live and work in these contexts. Access is often provisioned through mechanisms that are controlled within large countries, in the United States or in Europe, and there aren't really systems in place for documenting or understanding those mechanisms, so there are huge barriers to that kind of access, even where it's enabled. So that's something I would really urge companies in the private sector, and groups such as ours, to coordinate around, in terms of figuring out ways of ensuring that access in future, to shine a light within those contexts. Secondly, I think they're ignoring major threats to those who make up half or more of their user base, namely women, and particularly those involved in politics, either as candidates, policymakers, or ordinary voters.
Research has shown that they face many more threats online, and platforms need to institute mechanisms that can support them: to protect themselves, to understand threats, and to report and escalate issues as necessary. We have conducted research that shows the scale of the problem, but also looks to introduce a series of interventions and suggestions for the companies and others that are working to respond to these issues. But this is really a global problem that we see in every context we work in, and I think many in the room will understand this threat and this issue. Finally, I think there's a need to consider critical democratic moments and to work within those specific situations: how companies can work with the broader community to manage them, not only elections, but major votes or referenda, and also more critical moments such as coups, authoritarian contexts, protests, really critical situations. If they cannot appropriately resource these contexts and situations, which they may not have great understanding of, they at least need to engage with organizations that understand them and can help them react and effectively make decisions in these challenging situations. I think the retreat from programs such as Trusted Partners in the case of Meta, and a consistent whittling down of the teams that are addressing these issues, will have impacts on these places, on elections, on democratic institutions, and ultimately on these companies' bottom lines. The private sector should understand these are not only moral and political issues, but economic ones, which will push people away from these spaces as they become hostile or toxic to them in different ways. We understand the trade-offs in terms of profit and organizing systems that are useful for the general public, but we would encourage companies to reflect that the democratic world is integral to the open and vibrant functioning of these platforms. As with 2016 and 2020, 2024 will be a major election year, and will likely also represent a concomitant paradigm shift in moderation, information manipulation campaigns, and regulation, which is another kind of threat that companies need to consider, along with a host of related themes that will have big implications for their profits as well as for democracy. I think they are going to ignore these realities at their peril.

Moderator – Bruna Martins Dos Santos:
Thanks a lot, Dan. And also, thanks for highlighting some of the asks of the Year of Democracy campaign. We issued a document with the campaign asks, some things we would like to require from social media companies, such as streamlining human rights protections, bringing in more mechanisms to protect users, and addressing the problem at its real scale. We are not just saying, issue plans for elections; we are also saying, deploy the solutions, invest the money. It's not just Brazil that matters, but also India, Kenya, Tanzania. So that's what's really core and relevant about this conversation, for sure. So thanks a lot, everybody. I would like to ask if anyone has questions for the panelists, or would like to add any thoughts to the conversation. There is a microphone in the middle of the room, so yes.

Audience:
Thank you for giving me some space and the ability to express myself. So, I'm from Russia. We have a digital election system in Russia. And we are talking about threats which are posed by global media platforms all around the world. Primarily, it's Meta, like Facebook and Instagram, and Google, et cetera. But we didn't talk about cyber threats to these digital election systems. For example, two months ago, we had elections all over Russia, and our digital election system was attacked by a denial-of-service attack by a Ukrainian party to disrupt the elections. And the elections were disrupted for three or four hours, and citizens were not able to actually vote. So this is not about harming Russia as a state; it is about harming Russian citizens as citizens. That's the number one problem. The second problem, I think you have mentioned before, but I think it's a little bit deeper. We have talked a lot about global media platforms' involvement in information manipulation, fakes, and disinformation spread, et cetera. But we didn't talk about global media platforms' position, which tends to be neutral but is not always neutral in terms of conflict. Because there are two sides, and sometimes global media platforms choose sides. And what we see, and what we talk about a lot, is that global media platforms have very closed, very secret recommendation algorithms, which basically form the news feed for users. And the situation is that, for example, in some countries in Africa, and I think you can confirm this, Facebook actually represents the internet for some people. And Facebook can do a revolution in a click, just by altering users' news feeds with their recommendation algorithms. And nobody knows how these algorithms work. And I think the internet society, the global international community, IGF included, should put more pressure on global media platforms to make these algorithms more transparent. Because people should know why they're seeing this or that content. That's all. Thank you so much for giving me some time. Thanks a lot. Any other questions? Hello. Thank you for the panel. My name is Laura. I'm from Brazil. I'm here with the youth delegation, but I'm also a researcher at the School of Communication, Media and Information at the Getulio Vargas Foundation in Brazil. And I'd like to hear more about the issue of data access for academic research and civil society research. As a center specialized in monitoring the public debate on social media, we are very concerned with the recent changes mentioned by Arnaudo and by Yasmin regarding data access for us. And I'd like to hear more about what kind of tools and mechanisms the academic community and the civil society community in general can access to fight those restrictions and to face these issues, not only in the regulatory sphere, where this debate is present, but also in a broader way. Thank you.

Moderator – Bruna Martins Dos Santos:
Thanks so much, Laura. And the last question?

Audience:
Okay, two points. I'm Alexander, from a country where, in spring next year, 145 million people will elect Vladimir Putin as president. I have two points. First of all, I would like to thank Timothy for the information on the DoS attacks, because the Russian Central Election Commission didn't confirm any issues with the electronic electoral systems. Unfortunately, such systems in Russia were created by Russian big tech: Kaspersky created one system, used in Moscow, and Rostelecom, which could be considered big tech, created another. The systems are completely untransparent and do not comply with the Russian Commission's recommendations or other recommendations for digital systems. And in my view, a few of them are intended simply for faking results, I suspect. If you are interested in such details, please ask me later. But I would like to ask, maybe not the panel, but everyone: has anybody participated in elections recently? Thank you. Yeah. Okay. Have you tried to use platforms for your promotion? Okay. I should also inform Tim that nowadays Facebook cannot legally be used for promotion. But before, I created a political activist, or political candidate, page on Facebook and wanted to advertise myself in a constituency of about 20,000 voters. So I asked Facebook to make a suggestion, and they suggested two new contacts for 10 bucks. So I think in some cases either platforms don't understand the requirements of candidates, if they're not presidential-level, and we need to work on this, or they simply want too much money for promotion. Because, okay, if I were making prêt-à-porter cakes, maybe two contacts for 10 bucks would be reasonable, but not for someone who wants to advertise himself in a constituency. So I think such work with platforms, and platforms helping candidates, especially in restrictive regimes where advertising in physical media is no longer possible, should also be done. Thank you very much.

Moderator – Bruna Martins Dos Santos:
Thanks, Alexander. We have one extra question from the chat that I'm just going to hand over to you; you don't need to answer all of them, just the ones that speak to you the most. The question from the chat is: what should be done legally when some cross-border digital platforms, like Meta, refuse to cooperate with national competent authorities regarding cybercrime cases, like incitement to violence and promoting child pornography and private images, and even in serious crimes, and refuse to establish official representatives in the country? A rather dense question as well, but I will give the floor back to you. And as we're moving to the very end of the session, with only 12 more minutes, I would also ask you, in a tweet, to summarize your main recommendation for addressing this so-called global equity crisis in big tech accountability. I know it's difficult to summarize that, but if you have a tip, an idea, a pitch, it's very much welcome. I'll start with you, Ashnah.

Ashnah Kalemera:
Thank you, Bruna, and thank you for the very, very rich questions. I think they highlight that this conversation is not limited to elections and misinfo and disinfo or hate speech; there are very many other aspects around it. The DoS attacks that you pointed out speak to tech and the resilience not just of civil society organizations, but even of electoral bodies and commissions, or entities that are state-owned or state-run and leverage technology as part of elections. The same goes for conversations around accessibility and exclusion, because some of the technology around elections excludes key communities, which brings about apathy and low voter turnout, all of them critical to the conversation around elections. Similarly, the point around the positions and the power of these tech companies to literally start revolutions, to borrow your word, I think that, too, is an area that is critical to deliberate on more. The answers are not very immediate. Some of the work we've done researching how disinfo manifests in varying contexts has highlighted that the agents, the pathways, and the effects vary from one context to another. As I mentioned at the beginning, in contexts where there is conflict, religious, border, or electoral conflict, the manifestations are always very different, and the agents are always very different. So we're not necessarily pointing a finger only at big tech, but I think we are all mindful that this is a multi-stakeholder conversation that must be had and should be cognizant of all those challenges. There was a question on research; that's something we've felt on the continent, the inaccessibility of data. Previously at CIPESA we've leveraged data APIs, I believe that's the technical term, to document and monitor elections on social media, through sentiment analysis and analysis of micro-targeting. That capacity is now significantly limited, so we're not able to highlight some of the challenges that emerge during elections around big tech. That's not to say documentation through stories or humanization would not have the same effect when access to data is limited. What else did I want to talk about? Now I forget, because the questions were so heavy, but yes, the conversation is much broader than just elections and big tech alone. We all have a role to play, and engaging the least obvious actors, like electoral bodies, regional economic blocs, and other human rights monitoring or norm-setting mechanisms, is also critical to the conversation. Thank you.
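As a rough illustration of the kind of social media monitoring Ashnah describes, here is a minimal sketch of lexicon-based sentiment analysis over an exported set of election-related posts. It is not CIPESA's actual methodology: the file name, the expected "text" column, and the tiny word lists are all hypothetical placeholders, and real monitoring would use far richer models and multilingual lexicons.

```python
# Illustrative sketch only: crude lexicon-based sentiment tallying for a
# CSV export of election-related posts. File name, column name, and word
# lists are hypothetical placeholders, not a real monitoring pipeline.
import csv
from collections import Counter

POSITIVE = {"free", "fair", "peaceful", "transparent", "credible"}
NEGATIVE = {"rigged", "fraud", "violence", "intimidation", "suppression"}

def score(text: str) -> int:
    """Return +1 per positive word and -1 per negative word found."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def summarize(path: str) -> Counter:
    """Tally posts as positive / negative / neutral by their lexicon score."""
    tally = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumes each row has a 'text' column
            s = score(row["text"])
            tally["positive" if s > 0 else "negative" if s < 0 else "neutral"] += 1
    return tally

if __name__ == "__main__":
    print(summarize("election_posts.csv"))  # hypothetical export file
```

The point of the sketch is the dependency it exposes: the whole pipeline starts from an export of posts, which is exactly the step that becomes impossible when platforms withdraw API access.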

Yasmin Curzi:
So, regarding recommendations, I think real accountability is only possible if we have specific legislation and regulation of platforms. It's not possible to have a multi-stakeholder conversation when the power asymmetries are just too big for us to sit at the same table and discuss with them; they set all the rules that are on the table, so it's not possible to talk to them without regulation. In Brazil, for example, during the elections, the journalists Patrícia Campos Mello and Renata Galf asked Facebook and YouTube how much they were investing in content moderation in Brazil, to see how far they were complying with the agreements they had signed with the Superior Electoral Court. And they did not answer; they just said this was sensitive data. And we are talking about aggregated data on how much they were investing financially to improve their content moderation in Portuguese. If we don't have this basic information, if we don't have a way to assess how much harmful content is being recommended by their platforms, it is quite difficult for us to make proper public policies to address these issues. I'd just like to display the slides again, to do a bit of self-promotion. Sorry, can you display the slides again, just a minute? At the DCPR, our Dynamic Coalition on Platform Responsibilities, our outcome last year was a framework on meaningful and interoperable transparency, with some thoughts for policymakers and regulators worldwide who want to implement it, and also for platforms that are able and eager to improve their practices, so they can adopt the framework too. And this year's outcome, which we are going to release tomorrow, focuses on human rights risk assessments. This is the title: it's a collaborative paper with best cases, also discussing the legislation in India, the DSA, the DMA, and the Brazilian legislation. We are going to release it tomorrow; our session is at 8:30. Sorry for the self-promotion, I just wanted to show the document. That is what I would recommend people look at.

Daniel Arnaudo:
Yeah, thanks for the questions. Certainly, algorithmic transparency can be a good thing; you just have to be careful about how you do it, and about creating systems to understand the algorithms, because they can also be gamed in different ways if you have a perfect understanding of them. So it's a tricky business. On the need for better protections and systems for smaller candidates in different contexts: it's part of the system. It's not just individual users, what they're seeing, and how these systems or networks are being manipulated, but also how candidates can get access to information about political advertising, or even basic registration information. I think every country in the world should have access to the same systems that are used by Meta, Google, and other major companies to promote good political information, and I mean very basic political information about voting processes and political campaigns, anywhere in the world. On data access: you're seeing a revolution right now in how the companies provide access to their systems, and it's centered on X, formerly Twitter. That has changed the way any research is done on those platforms; it's much more expensive and more difficult to get at. Companies need to reconsider what they're doing in revising those systems and making them more difficult for different groups to use. Meta, in particular, will be really critical, so we need to work collectively to make sure they keep systems like APIs available to as many kinds of people as possible. Certainly, there are issues around placing company employees in certain countries around the world, and that can be problematic, because those can be authoritarian contexts, and then the employees potentially become bargaining chips within certain kinds of regulations those governments want to enforce. So you have to be careful about that, but I certainly understand the need to enforce regulations around privacy, content moderation, and other issues, so it's something that has to be designed carefully. And certainly, there's a huge crisis in how companies are addressing different contexts. Ultimately, they need to better staff and resource these different contexts: to have people who speak local languages, who understand these contexts, who can respond to issues and reports, and who know what they're doing. But this is expensive, and I don't think they're going to be able to work their way out of it through AI or something like that, as many have proposed. They need to recognize that reality, or they're going to continue to suffer, as, unfortunately, will we all.

Lia Hernandez:
Just one minute. I think it's necessary not just to empower the electoral authority; it's even more necessary to empower citizens, civil society organizations, human rights defenders, and activists, because we are the ones really working to promote and preserve democracy in our countries. So that is my recommendation. Regarding your question about data: in our case, for example, we are monitoring digital violence against candidates in the next election in Panama, and everything is very manual, because the digital platforms don't make the tools available to civil society; they make the tools available to governments. So we are trying to sign an agreement with the electoral authority to get access to those tools, because the work needs to be finished before the elections. And in any case, the data is not clean; they don't use open data standards, so we sometimes have to guess at the information they hold, which is not updated on their websites. It's quite difficult for us to work with this kind of data.
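To make concrete the kind of manual cleanup Lia describes when sources follow no open data standard, here is a minimal sketch of normalizing scraped election records. The field names, date formats, and sample records are hypothetical examples, not Panama's actual data.

```python
# Illustrative sketch only: coercing messy, non-standard election records
# into one consistent shape. Fields, formats, and samples are hypothetical.
from datetime import datetime

DATE_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%d %B %Y"]  # formats seen in the wild

def parse_date(raw: str) -> str:
    """Try each known format; return ISO 8601, or flag the value for review."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return f"UNPARSED({raw!r})"  # keep the raw value visible for a human pass

def normalize(record: dict) -> dict:
    """Coerce one scraped record: collapse whitespace, fix case, unify dates."""
    return {
        "candidate": " ".join(record.get("candidate", "").split()).title(),
        "registered": parse_date(record.get("registered", "")),
    }

messy = [
    {"candidate": "  ana   PÉREZ ", "registered": "03/02/2024"},
    {"candidate": "J. GÓMEZ", "registered": "2024-02-03"},
]
print([normalize(r) for r in messy])
```

Even a small script like this only papers over the underlying problem: without open data standards from the authorities, every new source means rediscovering its quirks by hand.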

Moderator – Bruna Martins Dos Santos:
Thanks a lot to the four of you, and to Alex as well, who is following us directly from the UK. Thanks, everybody, for sticking around. If any of this conversation struck a chord with you, go to yearofdemocracy.org, the website for the Global Coalition for Tech Justice campaign, and have a nice rest of the IGF. Thanks a lot.

Speaker                                 Speech speed           Speech length   Speech time
Alexandra Robinson                      142 words per minute   954 words       402 secs
Ashnah Kalemera                         162 words per minute   1780 words      658 secs
Audience                                145 words per minute   905 words       375 secs
Daniel Arnaudo                          172 words per minute   1578 words      549 secs
Lia Hernandez                           132 words per minute   744 words       338 secs
Moderator – Bruna Martins Dos Santos    181 words per minute   1397 words      463 secs
Yasmin Curzi                            141 words per minute   1287 words      548 secs