WS #133 Platform Governance and Duty of Care

26 Jun 2025 12:00h - 13:15h


Session at a glance

Summary

This workshop focused on platform governance and duty of care approaches to regulating online spaces, examining how different jurisdictions are moving beyond traditional self-regulation models to address digital harms like disinformation, hate speech, and coordinated harassment. The discussion brought together perspectives from Europe, Southeast Asia, Brazil, and the Philippines to explore what implementing duty of care frameworks actually means in practice.


Amelie Heldt presented the European approach, highlighting how the EU's Digital Services Act incorporates trusted flaggers—certified organizations that can flag illegal content for expedited platform review. This creates a hybrid governance model involving state actors, platforms, and civil society organizations, though it raises complex questions about the legal status of private actors performing quasi-governmental functions. Janjira Sombatpoonsiri discussed Southeast Asian contexts, where survey data shows over 65% of experts distrust government regulatory frameworks due to concerns about autocratization and misuse of laws to suppress opposition voices. She advocated for a "pluralist approach" emphasizing participatory fact-checking, community moderation, and multi-stakeholder initiatives to build democratic legitimacy.


Bia Barbosa outlined Brazil's current situation, where the Supreme Court is reinterpreting intermediary liability while civil society advocates for parliamentary regulation focused on systemic risks rather than individual content removal. The Brazilian Internet Steering Committee has proposed a typology distinguishing between different types of application providers based on their level of interference with content circulation. Yvonne Chua described the Philippines' shift from punishing users through libel laws toward holding platforms accountable, noting a recent congressional report that explicitly recognizes platforms' duty of care, while she herself warned that franchise requirements could enable political retaliation.


The discussion revealed common challenges across jurisdictions, including legislative delays, definitional ambiguities around systemic risks, and the need for safeguards against government overreach while maintaining democratic legitimacy in content moderation approaches.


Key points

## Major Discussion Points:


– **Diverse Regulatory Approaches to Platform Governance**: The discussion explored different models of duty of care implementation across jurisdictions, from the EU’s Digital Services Act with trusted flaggers, to Brazil’s Supreme Court reinterpretation of intermediary liability, to Southeast Asia’s preference for multi-stakeholder and pluralist approaches due to distrust in government frameworks.


– **Balancing Safety and Freedom of Expression**: A central tension emerged around how to create safer online environments while protecting free speech, with speakers highlighting concerns about over-censorship, government overreach, and the weaponization of content moderation laws against marginalized voices and political opposition.


– **Multi-stakeholder Governance Models**: Several speakers emphasized the importance of involving civil society, fact-checkers, journalists, and other stakeholders in platform governance rather than relying solely on government regulation or platform self-regulation, with examples from Malaysia’s Media Council and collaborative flagging initiatives.


– **Systemic Risk vs. Individual Content Liability**: The discussion revealed different interpretations of duty of care – some focusing on individual content removal and liability (as seen in Brazil’s Supreme Court case), while others emphasized addressing systemic risks through algorithmic transparency, process regulation, and platform design changes.


– **Implementation Challenges and Due Process Safeguards**: Speakers addressed practical concerns about defining systemic risks, ensuring adequate due process protections, preventing abuse by governments, and the need for clear legal frameworks that distinguish between different types of platforms and intermediaries.


## Overall Purpose:


The workshop aimed to examine how the concept of “duty of care” for platforms is being interpreted and implemented across different jurisdictions, sharing real-world experiences from Europe, Southeast Asia, Brazil, and the Philippines to understand what this regulatory approach means in practice and how it can build trust in online spaces while addressing harms like disinformation and hate speech.


## Overall Tone:


The discussion maintained a serious, academic tone throughout, characterized by cautious optimism tempered with significant concerns. While speakers acknowledged the potential benefits of duty of care approaches, the tone was notably apprehensive about implementation challenges, government overreach, and unintended consequences. The conversation was collaborative and constructive, with speakers building on each other’s insights, but there was an underlying urgency about getting the regulatory balance right to avoid both continued online harms and excessive censorship.


Speakers

**Speakers from the provided list:**


– **Beatriz Kira** – Assistant Professor in Law at the University of Sussex, on-site moderator, coordinates research project on platform governance funded by the British Academy


– **Amelie Heldt** – Works at the German Federal Chancellery (speaking in personal capacity), affiliated researcher with the Leibniz Institute for Media Research in Hamburg


– **Janjira Sombatpoonsiri** – Political scientist at Chulalongkorn University in Thailand and the German Institute for Global and Area Studies in Hamburg, researches legal tools for tackling disinformation in South and Southeast Asia


– **Bia Barbosa** – Journalist, represents Communication Rights and Democracy organization, civil society representative of the Brazilian Internet Steering Committee


– **Yvonne Chua** – Journalism professor at the University of the Philippines


– **Ivar Hartmann** – Colleague of Beatriz Kira, co-coordinates research project on platform governance


– **Audience** – Various audience members who asked questions during the Q&A session


**Additional speakers:**


– **Phoebe Lee** – Professor at the University of Sussex, online moderator (mentioned but did not speak in transcript)


– **Ramon Costa** – Rapporteur for the workshop (mentioned but did not speak in transcript)


– **David Sullivan** – From the Digital Trust and Safety Partnership (audience member who asked questions)


– **Baron Soka** – Runs Tech Freedom think tank based in the U.S. (audience member who asked questions)


– **Brianna** – (audience member who asked questions, full title/organization not clearly stated)


– **Yuzhe** – Former legal counsel at a social media platform (audience member who asked questions)


– **Andrew Camping** – Trustee with the Internet Watch Foundation (audience member who asked questions)


Full session report

# Platform Governance and Duty of Care: A Cross-Jurisdictional Workshop Report


## Introduction and Context


This workshop examined the evolving landscape of platform governance through the lens of duty of care approaches, bringing together perspectives from Europe, Southeast Asia, Brazil, and the Philippines. The event was moderated by Beatriz Kira, Assistant Professor in Law at the University of Sussex, and co-coordinated with Ivar Hartmann as part of an ongoing research project on platform governance. The workshop built on a previous workshop held in Brazil and a published report, with Phoebe Lee serving as online moderator and Ramon Costa as rapporteur.


The discussion explored how different jurisdictions are moving beyond traditional self-regulation models to address digital harms including disinformation, hate speech, and coordinated harassment. As Kira noted in her opening remarks, the goal was to examine what implementing duty of care frameworks actually means in practice, with particular attention to building trust in online spaces that have become central to democratic participation and social interaction.


## European Approach: The Digital Services Act and Trusted Flaggers


Amelie Heldt, speaking in her personal capacity while affiliated with the German Federal Chancellery and the Leibniz Institute for Media Research in Hamburg, presented the European Union's approach through the Digital Services Act (DSA). The EU model includes a trusted flaggers system under Article 22 of the DSA, where certified organisations can flag illegal content for expedited platform review.


Heldt explained that this approach creates complex legal relationships between state actors, platforms, and civil society organisations, raising questions about the legal status of private actors performing quasi-governmental functions. She posed the critical question: “If private entities become trusted flaggers, are they state authorities now?” This uncertainty extends to whether organisations serving as trusted flaggers should be bound by freedom of expression requirements typically applied to state actors.


Heldt also referenced Germany’s Network Enforcement Act (NetzDG) and acknowledged that while the DSA addresses systemic risks, significant gaps remain. She noted that “there is actually no rule that really goes at the core of the business model of social media platforms,” highlighting a fundamental limitation in addressing the underlying economic incentives that drive harmful platform behaviours.


## Southeast Asian Context: Trust Deficits and Alternative Approaches


Janjira Sombatpoonsiri, a political scientist at Chulalongkorn University in Thailand and the German Institute for Global and Area Studies in Hamburg, presented findings from her research on legal tools for tackling disinformation across Indonesia, Malaysia, Thailand, and the Philippines.


Her survey data revealed that over 65% of experts in Southeast Asia expressed distrust in government regulatory frameworks, primarily due to concerns about autocratisation trends and the potential misuse of laws to suppress opposition voices. This led Sombatpoonsiri to advocate for what she termed a “pluralist approach” to platform governance, emphasising community-driven initiatives including participatory fact-checking, pre-bunking efforts, and collaborative flagging systems.


Sombatpoonsiri highlighted Malaysia’s Media Council as a successful example, bringing together journalists, civil society organisations, and fact-checkers in collaborative content evaluation processes. She argued that such initiatives create greater democratic legitimacy than top-down regulatory approaches, particularly where “enforcement often becomes partisan with laws weaponised against opposition voices under the guise of combating disinformation.”


## Brazilian Developments: Supreme Court Intervention and Multi-Stakeholder Response


Bia Barbosa, representing the Communication Rights and Democracy organisation and serving on the Brazilian Internet Steering Committee, outlined Brazil's complex regulatory landscape. She explained that the Supreme Court has begun reinterpreting the country's intermediary liability regime through individual case decisions, with eight of the 11 justices having voted in favour of changing the regime and the final justice expected to vote that day.


This judicial intervention has created what Barbosa described as an “unclear duty of care framework,” raising concerns about potential strict liability regimes that could lead to increased private censorship. She explained concerns that the proposed duty of care could “result in practice in a change of the civil liability regime of platforms presupposing a duty of immediately removing such content in an automated manner.”


In response, the Brazilian Internet Steering Committee has developed a comprehensive approach distinguishing between different types of application providers based on their level of content interference. They have also launched a public consultation focusing on freedom of expression protection, information integrity, and harm prevention, while preparing technical notes for the Supreme Court that focus on systemic risks rather than individual content decisions.


Barbosa referenced Brazil’s Marco Civil da Internet and Article 19, emphasising the importance of multi-stakeholder regulatory bodies as safeguards against government abuse, arguing that such mechanisms are essential “to prevent abuse and protect freedom of expression regardless of government in power.”


## Philippines Experience: From User Punishment to Platform Accountability


Yvonne Chua, a journalism professor at the University of the Philippines, described her country’s shift from punishing users through cybercrime and libel laws towards holding platforms accountable. She provided stark statistics: over 3,800 cyber libel cases have been filed since 2012, whilst platforms have remained “virtually untouched” by regulatory oversight.


A significant development has been the Congressional Tri-Committee report, which Chua described as a turning point because, for the first time, it explicitly recognises that platforms have a duty of care that can and should be regulated. The report includes 11 recommendations addressing three major themes: platform regulation and accountability; governance, oversight and ethics; and literacy and enforcement.


However, Chua highlighted significant risks with certain regulatory approaches, particularly franchising requirements for platforms. She provided a historical example of regulatory abuse involving ABS-CBN, once the country's largest radio and television network, which lost its congressional franchise in 2020 due to political retaliation rather than media standards concerns. This example illustrates the risk of "turning speech regulation into a partisan weapon."


Despite these concerns, Chua noted that platforms have demonstrated capacity to respond effectively when regulatory expectations are clear, particularly during elections and emergency situations.


## Key Themes and Challenges


### Multi-Stakeholder Governance Approaches


All speakers emphasised the importance of multi-stakeholder approaches, though their implementations varied based on local contexts and trust levels in government institutions. The EU’s trusted flaggers system operates within a formal regulatory framework, while Southeast Asian approaches emphasise community-driven initiatives independent of government frameworks. Brazil and the Philippines both highlighted multi-stakeholder bodies as safeguards against government abuse.


### Trust Deficits and Political Weaponisation


A significant finding was the identification of trust deficits in government-led regulation across different political systems. Speakers from multiple jurisdictions raised concerns about the potential for platform regulation to be weaponised for political purposes, suggesting that regulatory approaches must include explicit safeguards against abuse.


### Systemic Risk versus Individual Content Approaches


The discussion revealed tension between systemic risk management approaches (focusing on platform processes and design features) versus individual content liability models (focusing on specific harmful content and removal obligations). The EU’s DSA and Brazilian Internet Steering Committee proposals represent systemic approaches, while court-led interventions tend toward individual content liability.


## Audience Engagement and Critical Questions


The workshop’s question-and-answer session raised several important issues. David Sullivan from the Digital Trust and Safety Partnership questioned how different legal frameworks define systemic risks and identify harms while avoiding restrictions on legitimate speech. Baron Soka from Tech Freedom asked about due process protections and their effectiveness “in the face of increasingly lawless governments.”


Yuzhe, a former legal counsel at a social media platform, questioned whether current regulations adequately address “core business model issues like attention-grabbing features like endless scrolling that exploit dopamine systems and cause addiction.” Andrew Camping from the Internet Watch Foundation explored whether duty of care frameworks could provide adaptable approaches to emerging technologies.


In response to questions about the DSA’s implementation, Heldt mentioned both the Digital Services Act and Digital Markets Act, noting ongoing developments including the Digital Fairness Act being considered by the European Parliament.


## Conclusions


The workshop revealed that duty of care approaches to platform governance represent a significant evolution from traditional self-regulation models, but face substantial implementation challenges related to political context, institutional trust, and definitional clarity. While speakers from different jurisdictions converged on the importance of multi-stakeholder approaches, their specific implementations varied significantly based on local contexts and trust levels in government institutions.


The identification of trust deficits in government-led regulation across different political systems suggests that building legitimate platform governance frameworks requires careful attention to democratic participation and safeguards against abuse. The discussion also highlighted limitations in current regulatory approaches for addressing fundamental business model issues that drive harmful platform behaviours.


As Kira noted in closing, the workshop demonstrated both the complexity of implementing duty of care frameworks and the importance of continued cross-jurisdictional learning and cooperation in developing effective approaches to platform governance that balance safety concerns with freedom of expression protections.


Session transcript

Beatriz Kira: Good afternoon, everyone. Welcome to our workshop, Platform Governance and Duty of Care, which is co-organised by Insper, a higher education institution in Brazil, and the University of Sussex in the United Kingdom. My name is Beatriz Kira. I am an Assistant Professor in Law at the University of Sussex and I'm going to be your on-site moderator today. And I'd also like to thank our online moderator, my colleague Phoebe Lee, Professor at the University of Sussex, and our rapporteur, Ramon Costa, who is here today. The central challenge that we are going to gather here to address today is one that defines our digital era in 2025. How to build and maintain trust in the online spaces that have become so central in our lives. The threats that we face are neither abstract nor distant. They include, for example, disinformation campaigns that consume elections, hate speech that spills from screens into the streets and real-world harms, and co-ordinated harassment that could silence marginalised voices. For a long time, the debate on how to tackle these issues has often revolved around inadequate solutions. But with time, the conversation has matured and we have now decisively moved away from relying solely on self-regulation by platforms into a more regulatory model. Around the world, the ground is shifting and we see, for example, the European Union's Digital Services Act, which addresses systemic risks. We see in the United Kingdom, where I'm based, the Online Safety Act, which pioneers the idea of a duty of care. And we also see in countries in the global south, like Brazil, where the Supreme Court is currently reinterpreting intermediary liability within the Marco Civil da Internet. And what we are seeing emerge here is not only kind of a new approach to regulation, but kind of different approaches altogether, different strategies. And duty of care, which is the topic of our workshop today, is one of them. It represents a departure from traditional intermediary liability models, and rather than solely focusing on what platforms should keep up or take down, it asks what can platforms do to create and to promote a safer environment online. With this shift, there are new opportunities and new spaces for more stakeholders. And, for example, oversight institutions, fact-check organizations, civil society, and users have the opportunity to play new roles. And these are the things that we want to discuss with you today. But I think the crucial question motivating us in this debate is precisely what does adopting a duty of care approach to platform governance actually mean? This question sits at the center of a research project that I coordinate with my colleague Ivar Hartmann, which is funded by the British Academy, and it compares experiences in the UK and in Brazil. Last year, we hosted a multi-stakeholder workshop in Brazil to explore this question, and insights that were discussed led to the publication of a report early this year where you can find the main findings that we published. It is freely available, and I really encourage you to read and to download it. And building on that discussion and building on the findings from this report, one of the key issues that we learned is the importance of learning from different jurisdictions, not only Brazil and the UK, but also other actors considering duty-of-care approaches to platform governance. And this led us to organize this workshop here today and we are very thankful to IGF for hosting us.
Today we have five speakers that bring to the discussion something very valuable, which are real experiences from different continents, different countries, different regulatory environments to tell us a little bit more about what it means for a duty-of-care approach to be embedded in platform governance, in platform regulation. So we have representatives with more public sector experience, internet governance bodies, journalism and academia. So just rounding up this introduction and housekeeping for the session, we are going to start with a round-the-table session with each of the speakers, making their initial remarks for around seven minutes. They are going to introduce themselves and make their initial remarks in about seven minutes. And then I’m going to open the floor for discussion, for debate. I really hope that you can engage with us, make this a really engaging conversation and not only me talking to you or them talking at you. So thank you again for being here. Let’s begin. We have the first speaker today, Amelie. Amelie is online with us, joining from Germany, I believe. But Amelie, I’m going to give you the floor. You can introduce yourself and tell us a little bit more about the kind of European perspective to this issue. Thank you.


Amelie Heldt: Thank you. Thank you for having me. Do you hear me well? Yes. Okay, good. Yes, I'm joining from Germany online today, unfortunately. My name is Amelie Heldt. I work at the German Federal Chancellery, but I'm here today in my personal capacity and as an affiliated researcher with the Leibniz Institute for Media Research in Hamburg. And I'm happy to talk a little bit about the situation in Germany and in Europe just briefly, and then we can go into details later on. So the German situation is that we already have a general duty of care between contractual partners. That's something that is in the BGB, our civil code here in Germany. And it's something that has been used in the past to actually test terms and conditions of online platforms, specifically also to see to what extent platforms can be bound by freedom of expression. So in the framework of a horizontal effect of freedom of expression. So that's like the starting point that we had. Then as you're probably or maybe aware, there was the e-Commerce Directive here in Europe. That's the predecessor law of the DSA. And Germany then adopted in 2017 a law called the Network Enforcement Act, the NetzDG, which obliged platforms to implement a system of notification for users so they could actually flag content as being illegal. Later on, or now that we have the DSA, this obviously replaced the NetzDG. And in the DSA, we have many more duties, quite specific duties for platforms to act against illegal content. One thing that I would like to highlight today, what I think would be interesting for the discussion, is the role of trusted flaggers under the DSA. Trusted flaggers are in Article 22 DSA. It's, as you may be aware of, an instrument that has been used by platforms in the past. So we could assume it's sort of a best practice of online platforms that has then been integrated into the Digital Services Act. And the Digital Services Act contains rules, meaning that the regime of trusted flaggers is the following. They can notify or they can flag content, but only illegal content. So they have to—or it's under the provision of Article 16 DSA. And how do you become a trusted flagger? You will be—there will be a process of certification with the Digital Services Coordinator, and that's the main public authority in charge of the Digital Services Act in the respective member state. The flaggers—well, they flag the content, and then there will be a speedy check by the platforms, which then decide whether they want to remove it or not, whether they think it's illegal or not. So it's kind of, you could say, a multi-stakeholder instrument. And as I mentioned, it was already in use before the DSA. And the discussion that has been going on around this is the question how to evaluate actually the role of private actors in this sector that are the organizations that will be trusted flaggers under Article 22. In a way, or some people say, they act on behalf of the state, so they can be considered state actors, which will change their legal status in a way that they would be bound by other rules and will be closer to state action. And that's relevant for anything that goes against freedom of expression or against user content. There's also the question how to deal with organizations that are mainly focused on a private interest, such as intellectual property, that's often in a context that's commercially used. So this is the big discussion. Obviously, it helps when it comes to hate speech because the removal will be more speedy.
And that's something that has been advocated for a long time. So I'm happy to dive a bit deeper into this use case later on. But what I wanted to highlight here, to sum it up, is sort of the hybrid governance that we have here between a law like the Digital Services Act, that's not only about platforms being compliant and the state in charge of reviewing their compliance, but also private actors, such as NGOs, which can also be funded by the state, taking an active role in the implementation of the DSA. And it makes the whole context more difficult, and it needs to be assessed carefully how to juggle these new instruments. And it's quite complicated to untangle the legal relationships between all these actors. So that's quite specific. I'm happy to also talk about the DSA more generally, but I thought that could be interesting for our discussion.


Beatriz Kira: Thank you so much, Amelie. And it was a really good opening in terms of highlighting not only kind of what's the model or the groundings of duty of care domestically, but also kind of the supranational model of the DSA. And I think we are going to continue in some form of this kind of national and supranational perspective, but now bringing the perspective from South and Southeast Asia. So we have Janjira, who's going to tell us a little bit more about that. So yes, the floor is yours.


Janjira Sombatpoonsiri: You hear me, yeah? Okay. Thanks so much for having me. And so my name is Janjira. I'm a political scientist working at the University in Thailand, Chulalongkorn University, and the German Institute for Global and Area Studies, based in Hamburg. And so over the past years, I've been doing research, collecting data on legal tools used to tackle disinformation problems in South and Southeast Asia. And at the same time in Thailand, we also survey experts' views on the impact of disinformation on electoral integrity, social cohesion, and information integrity. And so before we jump right into the conversation about platforms' duty of care, I just want to begin by sharing some preliminary findings from our survey from the project in Thailand. So the survey actually explores experts' views, as I said, and we launched the surveys in four Southeast Asian countries, Indonesia, Malaysia, Thailand, and the Philippines. And what is so interesting is that the majority of respondents, around over 65 percent, expressed distrust in government regulatory frameworks. And I'm not really surprised, to be honest. Low trust in government's genuine commitment to protecting citizens from online harms underpins public skepticism towards state-led regulation of platforms. And I think that's why we kind of want to move forward to a multi-stakeholder approach. And so I find the results quite telling, underscoring how political context shapes platform governance and, you know, the debate on platform governance. And I think when we discuss this issue, we have to look at various forms of threats that online users and citizens face. So it's not only disinformation in Southeast Asia, but we also have trends in autocratization. And so the issue in the region isn't whether governments can legislate, but whether they can enforce these laws in ways that uphold the rule of law and due process, right? And so public mistrust is further reinforced by the increasing misuse of such laws. And there are plenty of laws in the region to suppress freedom of expression, as seen in a number of countries, including Thailand, where I'm from, Cambodia, and especially the Philippines under the Duterte administration. So online critics face harsh penalties, while platforms are pressured to remove politically sensitive content under the guise of combating disinformation. And so my point is, in practice, enforcement often becomes partisan, at least in Southeast Asia. And in the context where today's opposition might be tomorrow's government, these laws can be quite conveniently weaponized. My point is, while legal and regulatory tools are important, they are not a silver bullet. And I don't know if this point would be considered contradictory to a lot of folks at the IGF, but my research shows that public preference leans toward not only a multi-stakeholder approach, but also greater synchronization across initiatives to improve the integrity of online information. And I like to call this approach a pluralist approach, which is slightly divergent from a multi-stakeholder approach, not completely new. So the approach includes not only regulatory frameworks grounded in platforms' duty of care, but it also moves beyond the legal sphere to encourage participatory fact-checking, structured pre-bunking efforts, and community moderation initiatives. So I'm just going to spend the last two and a half minutes giving you some illustrative examples of how this approach has been trialed in the region.
I just remind you that each of the cases that we study doesn't have the comprehensive package of this approach, right? And one example… The sample comes from each country, so it's just quite fragmented still. Now, regarding the participatory fact-checking, in Malaysia, for example, after the repeal of the Anti-Fake News Act in 2018, journalists and civil society actors began advocating for the creation of the Malaysian Media Council, or the MMC, which is an independent self-regulatory body tasked with handling public complaints, countering disinformation, and engaging with tech companies. And in Myanmar, Thailand, and the Philippines, for example, civil society organizations have collaborated with platforms and parliamentary committees to flag accounts, I think actually an approach very similar to what the previous speaker suggested. So these CSOs have collaborated with platforms and parliamentary committees to flag accounts suspected of coordinating disinformation campaigns. So the idea is not only to flag and target false content, and I highlight content, but also to address a deeper political economic root of online influence operations, and to disrupt their financial incentives. Last but not least, the pluralist approach helps ensure the regulatory frameworks are anchored in initiatives driven by broad-based segments of society. And this idea of a coalitional, a pluralist approach to platform governance is so important because at the end of the day, it's about creating democratic legitimacy for such frameworks for society to adopt, accept, and pursue the goal of combating disinformation together. So I'll stop here.


Beatriz Kira: Thank you so much, Janjira, and it was really interesting to hear how the relationship of trust between users and the government and users and platforms is not uniform across different settings, different jurisdictions, and how that informs also the governance arrangements, as you mentioned. So if we contrast with what Amelie was telling us about a more regulatory-driven effort, we heard from Janjira now a more civil society-driven, but similar in a way, trusted flagger participatory fact-checking initiative. Thank you so much. Wonderful. So we are going to move on to kind of a round-the-globe overview. We have now, to my right, Bia Barbosa from Brazil, who is going to introduce herself and talk a little bit about kind of the perspective from Brazil, and there's a lot going on, so it is a challenging position to be in. In seven minutes. Yes. Thank you, Bia.


Bia Barbosa: Thank you so much, Beatriz, and Professor Ivar as well for the invitation for the Brazilian Internet Steering Committee to be here. I know that our member from the Brazilian Internet Steering Committee also helped to organize this activity, so thank you so much for everybody being here. I'm a journalist, and I'm speaking on behalf of a civil society organization I belong to that's called Communication Rights and Democracy, which is a member of a huge coalition, the Rights in the Network coalition in Brazil, that gathers more than 50 civil society organizations that struggle for different rights online, and as a civil society representative of the Brazilian Internet Steering Committee. So I will begin, I'll try to begin by summarizing a little bit the current state of discussions on duty of care in Brazil, focusing mainly on two different perspectives on the table, and then present the contributions that we have at the Brazilian Internet Steering Committee, the CGI.br. The idea of duty of care began being discussed in Brazil based on the proposal by the government, inspired by the debates in the UK, which included this concept in the bill on transparency, freedom, and responsibility on the Internet. At that time, in 2023, the government's proposal was to establish a duty of care for social media in relation to the circulation of illegal and harmful content that could promote hate speech, racism, violation of children's rights, and attacks on the democratic rule of law. It's important to remember that this happened shortly after an attempted coup d'état that our country suffered after the far right lost the elections at the end of 2022. As in the British Online Safety Act, social media would not be responsible for individual content, but rather for their responses to the general circulation of such content. At the same time, the bill spoke of the duty of companies to mitigate the systemic risks generated by their services. The presence of these two different concepts in different articles of the text, in my opinion, raised concerns that the proposed duty of care would result in practice in a change of the civil liability regime of platforms, presupposing a duty of immediately removing such content in an automated manner. This position has been reinforced by different government actors, including the contents of the trial currently taking place in the Brazilian Supreme Court. There are two cases under joint discussion. Today we should hear the opinion of the last justice on that, the 11th justice. The cases analyze the liability regime for intermediaries, which has been in place since 2014 under the Brazilian Civil Rights Framework, the Marco Civil da Internet, which in its Article 19 states that application providers, all of them, and I will go a little bit further on that, can only be held liable for damage caused by third-party content if they disobey a court order to remove it. The Brazilian Supreme Court already has a majority of votes to change this regime, so eight votes for now out of 11 justices have been cast in this direction, and the concept of duty of care has been cited many times, mentioned many times by the justices.
What is still unclear from the votes is whether the court will completely change the liability regime and consider social media as publishers, equivalent to the media outlets, for example, responsible for everything that is published, or whether it would only apply to content, illegal, harmful, we don't know, that must be proactively removed. As this is a case that deals precisely with civil liability, that is, liability for individual content and not just systemic risks, there is a reinforcement of this interpretation that the concept of duty of care is related to civil liability. And it is also unclear how the Supreme Court will ensure the implementation of its decision, considering that under Brazilian law, civil liability must be determined by the courts, and also considering that Brazil does not have a regulatory authority or body to deal with this topic. That is why a significant part of civil society, including organizations that are part of the Rights on the Network coalition, which I belong to, has raised important concerns about the idea of duty of care, especially in light of abuses already committed by platforms that have unduly silenced the voices and struggles of minority groups in Brazil. In a context where they may become legally responsible for this content, there is no doubt that abuses will multiply, and if we are to have, as we hope, a slightly healthier environment in terms of the removal of illegal content, we will also have an environment in which critical voices, journalists and human rights activists and defenders will become even more victims of private censorship. That is why this part of civil society strongly advocates that Brazil move forward with regulation through the Parliament, which can prioritize the regulation of processes, algorithms and content moderation mechanisms rather than individual content, in a perspective close to the concept of addressing the systemic risks of this service. We understand that the Supreme Court ruling is important in a context where the Brazilian Parliament has been unable to make progress on this issue. The bill I mentioned before was blocked in 2023 by pressure from big tech companies in alliance with the far right that was anything but republican. However, depending on how the court concludes this trial, we may have a situation where this decision is not implemented or, even worse, a worrying impact on all application providers. This is why the Brazilian Internet Steering Committee, which defends the constitutionality of Article 19, precisely because it applies to all providers, has prepared a technical note proposing a typology for application providers to assist the Supreme Court. This typology is based on the level of intervention of companies in the distribution of third-party content and emphasizes the need to modulate the accountability of agents according to their functionalities, proposing appropriate and proportional liability. According to CGI's technical note, the analysis of Article 19 should observe the following distinction. Application providers whose functionality does not interfere with the circulation of third-party content, those that operate on the Internet as a simple means of transport or storage, such as website hosting or email providers. A second typology would cover application providers whose functionality has low interference with the circulation of third-party content, such as websites specializing in editing articles and entries.
And application providers whose functionality has a high interference with the circulation of third-party content, potentially constituting a risky activity. This interference includes profiling, mass dissemination, algorithmic recommendation, micro-segmentation, strategies for continued engagement, paid content, targeted advertising, among others. So we believe that the Supreme Court's decision should apply to these providers, which are far from being neutral intermediaries. Finally, reinforcing the potential for multistakeholder construction of balanced and democratic regulatory proposals, I'd like to share that, looking at the task assigned to the Brazilian Parliament, the Brazilian Internet Steering Committee has also launched a public consultation on principles for the regulation of social media. Three of these principles relate somehow to the idea of duty of care: the protection of freedom of expression, privacy and human rights online, the protection of information integrity, and the prevention of harm and accountability. In this sense, and I'm finishing, I promise, CGI proposed for debate the idea that social media should make their best effort to prevent and guard against potential harm arising from their activities, especially those arising from the circulation of content, understand that they are responsible for the harm arising from systemic risks inherent to the services provided, and should repair or mitigate them. Damage resulting from systemic risk is understood to be that caused by the network environment resulting from its policies of transparency, moderation, recommendation and content boosting. So thank you very much once again, and I'll be happy to go further during the debate.


Beatriz Kira: Thank you so much, Bia. And I think it's interesting to see how, in theory, the same concept of duty of care has been mobilized differently, not only across jurisdictions, but even within the same jurisdiction, by different kinds of stakeholders. So the Supreme Court is understanding and interpreting this from one perspective, more linked to the idea of intermediary liability, whereas the legislative efforts that we have seen in Parliament in the past take more of a systemic approach, in a way more similar to the UK and the EU. And I think in this vein, and thinking about how parliaments have been trying to embed the idea of duty of care in legislation, we are now joined online by Yvonne Chua, who is going to bring us a perspective from the Philippines. So Yvonne, thank you so much for being here with us today. I imagine it's quite late for you, so yes, please go ahead and introduce yourself, and you have the floor for your remarks. Thank you.


Yvonne Chua: Don't worry, it's still early evening in the Philippines. I'm Yvonne and I teach journalism at the University of the Philippines. You know, I'm really glad we're having this conversation, because in the Philippines we've been stuck for some time. For years our lawmakers have been proposing to fight disinformation and online harm by focusing on punishing the users, primarily tightening libel laws, ramping up cyber libel and proposing stiffer penalties. But we've seen where the existing laws have led us: citizens and journalists sued and even silenced, and the platforms that enable virality and amplification virtually untouched. That's why a report by a House or Congressional Tri-Committee released a few weeks ago feels like a turning point. For the first time, it explicitly recognizes that platforms have a duty of care and that that duty can and should be regulated. Now, just a bit of context. The Cybercrime Prevention Act in the Philippines, combined with our colonial-era libel law, has exposed hundreds of citizens, including journalists, to libel charges. Libel is a crime in the Philippines, and more than 3,800 cyber libel cases have been filed since 2012, when the law took effect. This figure underscores the significant reliance on criminal libel as a tool against speech online. And it doesn't stop there. We have the anti-terrorism law of 2020, for example, widening the state's surveillance powers and introducing dangerously vague speech restrictions. So while we jail users, no Philippine law holds platforms accountable. That brings us to the House or Congressional Tri-Committee on public order, information and communications technology, and public information. After months of hearings with government agencies, academe, the private sector, fact-checkers, data scientists and even influencers, the committee recently concluded, and let me quote, our punitive toolkit is grossly inadequate and ill-equipped to counter well-funded disinformation ecosystems. The committee proposed something new, at least in the Philippines. Its 11 recommendations addressed three major themes: platform regulation and accountability, governance of oversight and ethics, and literacy and enforcement. In the interest of time, I'll focus on those that are highly relevant to today's conversation. First, the committee proposed a review and amendment of the Cybercrime Prevention Act of 2012 to explicitly define social media platforms, prescribe penalties for their participation in content-related offences and incorporate provisions on disclosure of data, parameters for auto-blocking of content and preservation and retention of data. In the case of access to platform data, this is really intended to address the problem when platforms keep refusing to accommodate requests for data by citing American laws, or US privacy laws in particular. Closely related to this is another committee recommendation to enact or pass a new law that would establish a comprehensive legal framework against false or harmful online content. It seeks to authorize government agencies to issue takedown, rectification and block access orders. Now, that's the problem. While both recommendations emphasize intermediary obligations, the proposed provisions on content blocking or takedown could easily lead to overreach without clear limits. In the Philippines, a similar takedown clause in the Cybercrime Prevention Law was already struck down by our Supreme Court several years ago for bypassing judicial review. We don't want to have Singapore's POFMA in the Philippines.
As worrisome is the franchise proposal. The Tri-Committee recommended that foreign platforms maintain an in-country office that can be held legally liable. No problem with that, and for that, it makes sense to clarify jurisdiction and ease enforcement. But the committee also recommended that platforms secure a congressional or legislative franchise, just like traditional broadcasters or certain public utilities, in order to operate in the country. This is deeply problematic. We've seen this before, when ABS-CBN, once our country's largest radio and television network, lost its congressional franchise in 2020 because the allies of then-President Duterte refused to renew its franchise. It wasn't about media standards. It was political retaliation. So franchising platforms risks turning speech regulation into a partisan weapon, and any accountability bill that includes a franchise requirement must guarantee ironclad due process and protection from political pressure, or better yet, just shelve the whole idea. I just want to ask: can platforms actually meet these obligations? The answer is, of course, yes. They've already shown us that they have the capacity. They've responded to crises, elections, and coordinated harm in the Philippines and elsewhere. When regulatory expectations are clear, just as they are in the EU, the UK, and several other countries, platforms adjust. They set up task forces, they take down harmful content, illegal content, and they would even tweak their algorithms. So where do we go from here in the Philippines? At noon on June 30, a new set of lawmakers will assume office. So the Tri-Committee's work right now will be archived, but its findings certainly leave us a path. We can continue to criminalize users and chase trolls, or we can follow the path that it has pointed us to and finally hold platforms to account. Duty of care is not a silver bullet, but it shifts the burden upstream, where the harm begins. We've seen it work in other regions, in other countries. We've seen it can work here during elections and emergencies, but we need to build in safeguards. We need to make sure our laws protect free expression and human rights, even as we build a healthier, more trustworthy information space. Thank you.


Beatriz Kira: Thank you so much, Yvonne. I think it was heartening to hear from you in terms of how difficult the situation is not only in Brazil. So as we heard from Bia, like in all the other jurisdictions, the Philippines is a great example of trying to balance the very important goals of promoting safety and trust while upholding freedom of expression, and not really giving the government of the day powers to overstep and interfere with this right. But also good to hear from you that, of course, there are ways of doing that right; it's a mix of political will, collaboration, cooperation and compliance from the platforms, but it's not an easy task to put together the legislation or the regulation. Thank you very much for that, and with this we're moving to our final speaker for today, my colleague Ivar Hartmann, who is going to bring a little bit more of a summary of the challenges that we heard today as we prepare the floor for questions. So just to let you know that after Ivar I'm going to open the floor for questions, so you can start thinking about the questions you're going to ask our speakers, and you can pick up the different points and ask them to unpack something. Yes, so think of your questions while we hear Ivar's presentation. Ivar, the floor is yours.


Ivar Hartmann: Thank you Beatriz, thank you to our speakers who agreed to join us in this important yet complex discussion, and thank you to everyone who is joining us in the conversation as well here on site and online. Different workshops and panels here at the IGF have different strategies for making sure that it's actually a conversation, a debate, and not just different people speaking in sort of isolation, so our approach was that I'll try to sum up obviously not 100% of all the important comments and contributions, but connect, make an effort to connect the realities in all these four jurisdictions that were discussed here, trying to take a shot at interpreting the valuable information and lessons that we heard from all these four jurisdictions. And I'll do that by looking at this from two different perspectives. One is, what have been the challenges to creating and enforcing duty of care mechanisms and regulation in different jurisdictions? And what have been the concrete solutions that our speakers have shared with us, that have been at least proposed, if not attempted, or have worked in these jurisdictions? One thing that I think unites, if not maybe, the European Union and the jurisdiction that Amelie so kindly talked to us about, but certainly unites Thailand, the Philippines, and Brazil, is that there is a perception by civil society and many stakeholders that there is a delay by Congress to offer, to provide, to actually create a framework that will address these more pressing issues that maybe everyone knows about, how problematic disinformation is, for at least 10 years. But the more current versions of this and how it operates, money trail, that sort of thing, that regulators in these countries have not been quick enough to adapt. That certainly has been the case in Brazil. As Beatriz has told us, basically, the main challenge is that because of the delay by Congress, as soon as the Brazilian Supreme Court steps up and strikes down the current rule for intermediary liability for platforms and offers in its place an unclear description of a supposed duty of care framework, this will create a fear on platforms and therefore on their users that, instead of the old regime, which was not strict liability but was based on a trigger requiring a court ruling in order for a platform to be responsible for illegal content, in this case the court comes in, strikes down that rule, and then, because of a void of Congress legislation, what's left is an unclear
It seems, as Janjira brought to our attention, a survey of four different Southeast Asian countries has shown that a majority of people distrust government regulatory frameworks, so merely creating a new law such as the DSA in the European Union is clearly not enough. She tells us of a pluralist approach, one that would ensure representation from all segments in society, creating roles, multi-stakeholder roles, embedded in this regulatory framework, because at the end of the day, it is about creating democratic legitimacy for content moderation, if I understood Janjira correctly. She tells us about citizen fact-checking. are working on a self-regulatory body tasked with engaging platforms and handling complaints. So there is an intermediary to the intermediaries and the government and the users. And to my understanding, it seems we have similar concerns also in the Philippines, as Yvonne has told us. It seems legislators have been stuck for many years. Overall, platforms are virtually untouched in either their failure to remove problematic content or when they over-censor. And she brings to us news of this important report by a congressional authority that is basically identifying one of the biggest problems with a version of duty of care framework or government framework, which is the excessive or, let’s say, undue use of government oversight in such a framework, whereby government, as in the past in the Philippines, she tells us has happened, would actually retaliate against digital platforms by forcing them to remove content that’s actually not illegal. So the solution then, the concrete solution, would be to guarantee in the law due process, to also guarantee access to platform data, to do a full review of cybercrime law, to avoid interpretations of libel laws that would mean legal content gets removed, even though it’s legal just under the guise of alleged defamation. And lastly, that would prevent against content blocking that is an overreach by platforms. So in summary, I think there are, of course, not identical challenges, but challenges in these platforms. These four jurisdictions that are very similar in their roots, because as we know, the problematic business model is a worldwide business model of algorithmic recommendation and advertising. And the solutions are not exactly the same. And so the DSA, to the challenge, as AmĂ©lie has brought to us, to the challenge of the question of if you have a duty of care framework and you hope that civil society does fact-checking as well and identifies hate speech as well, well then how do you figure out who becomes a flagger or a trusted flagger? How do you evaluate the role of private actors who take on this important task? Are they a state authority now or can they, for the purposes of the law, be considered a state representative? Because that would mean state action that restricts freedom of speech and so this has immense consequences. And so the solution there has been in the DSA and obviously we don’t have many, many years of enforcement yet, but we have some time of enforcement and just to finalize, and I think this is an important place to end because this is where it’s perhaps most advanced in terms of how far the implementation is going. 
The solution there has been government certification for these flaggers: there are rules in the DSA, as Amélie has told us, that establish what needs to happen, what the requirements are for a person or a private entity to become a trusted flagger, so that we can avoid abuse whenever very specific commercial private interests are involved. Once again, lastly, thank you to our four speakers, and I apologize for any misinterpretation that I might have made here.


Beatriz Kira: Thank you so much, Ivar. I am going to comment and continue the discussion, but I know that we have key members of the audience who want to ask questions, so I am going to open the floor for questions on-site and online. If you are online and want to ask a question, please use the chat function and we will be monitoring it. For people in the room, there is a mic to my left-hand side. We already have one question, two questions from the floor. Let's take them in rounds, perhaps. Is that okay? Can you please start with your name and your organization? And let's try to make the most of this with quick questions that end with a question mark. Thank you.


Audience: Thanks. I'm David Sullivan from the Digital Trust and Safety Partnership. We work with companies on best practices and standards for trust and safety. My question for all the speakers is: how are the different legal frameworks that have been presented defining systemic risks? And how do you approach identifying risks that are not illegal content in a way that would not end up restricting access to information and freedom of expression? Thanks. Excellent question. Thank you. Yes, we have another one from the floor. I'm Baron Soka, Tech Freedom. I run a think tank based in the U.S., but I'm really here to thank you for focusing on the potential for abuse. I'm glad we're talking about that. I would suggest that when it comes to franchising, there are no due process safeguards that could ever make franchising or licensing safe. We've seen how governments around the world use franchising and licensing to extort whatever they can get from the licensees. My question is: when should we have confidence in due process protections, and what kinds of due process protections do we think will actually work in the face of increasingly lawless governments? The Trump administration clearly doesn't care; they'll run roughshod over whatever due process requirements are in place. I'll give you two concrete examples to frame my question. One, Article 73.2 in the Digital Services Act, I think, doesn't get enough attention. It says that before the Commission can issue a finding of liability, it has to explicitly say what the platform should have done or should now do. So I'd like to hear your thoughts about that as a due process safeguard. And then, just to close, in the United States we've had this debate over the Kids Online Safety Act. Republicans have been very clear that they think the bill will be weaponized against transgender content. The bill's Democratic sponsors think that the safeguards they've put in place are adequate. I don't think so. I want to know what kinds of safeguards, concretely, you think would be adequate to ensure that duties of care can't be used against particular kinds of content.


Beatriz Kira: Thank you so much.


Audience: Yeah, hi, Brianna. I have a very similar question. Recently we did a large study on implementation of, and alignment with, the DSA in the EU candidate countries, and that is what prompts my question, maybe shifting the focus here a bit. I completely understand the relevance of the duty of care of platforms, but I also want to shift the focus back to the duty of care of the states. The previous colleague asked about safeguards, and I am really curious, because the DSA itself does not have these safeguards for the rule of law and due diligence. So I want to understand better whether your research in your respective countries, and the legislative examples, can provide us with a better understanding of the duty of care of states when implementing these laws, to prevent any forms of abuse that would be detrimental not only to citizens but, more broadly, to the democratic setting. Leaving aside, for now, the platforms and how it will affect their business models.


Beatriz Kira: Thank you so much.


Audience: Hi, I'm Yuzhe. I previously worked at one of the social media platforms as legal counsel. We see a lot of progress in regulation, but I have a feeling that some of the problems with platform governance are caused by the core business model of these platforms, the attention-grabbing model: features such as endless scrolling that exploit our dopamine system and cause excessive use and addiction. I haven't seen current regulation touch much on these core issues, so I'm curious whether the panelists have any thoughts on the direction for resolving these issues or addressing these challenges.


Beatriz Kira: Thank you. I’m going to close the floor after the tall gentleman over there, so, yes, last two questions.


Audience: Hi, Andrew Camping. I'm a trustee with the Internet Watch Foundation, but I'm not speaking for them. Firstly, I think it's really encouraging to hear this discussion and that there's at least an appetite for some action in this space, when you consider the immense harm that approaches like Section 230 in the States are doing around the world. So this at least shows hope, with appropriate guardrails, as the speakers have touched on. My question, though, concerns an aspect you perhaps haven't mentioned: I believe that a general duty of care on platforms provides a lot of promise to cover new technological developments which otherwise have to be explicitly mentioned in legislation, and various sessions this week have already asked how legislation keeps up with the pace of new developments. A general duty of care, in my view, is a way of doing that, putting the onus on the platforms to show appropriate risk assessment. Thank you very much for your time, and I look forward to hearing your thoughts on this as well. DSA enforcement has become very geopolitically contentious under the Trump administration, and I was wondering whether, in developing regulation in other parts of the world, this is something that is playing a part as well: whether you feel pressure to align your social media regulation a little bit with the US administration's take on it. So thanks a lot.


Beatriz Kira: Thank you so much. We have a wide range of questions. Please don't feel compelled, all of you, to answer all of them; you're very welcome to pick and choose. I'm going to give each panelist, each speaker, two minutes, and please stick to two minutes so we can hear from everyone. I know it's challenging. But again, pick your favourite one or a couple of favourites, and let's start in the same order that we had the panel. Maybe, Amélie Heldt, you can start with your two minutes and make this your closing remarks as well. We can hear you.


Amelie Heldt: Yes, can you hear me? Okay, now. Yes, okay, good. So, there are many questions regarding the systemic risk definition. It's a tricky one. There is a legal definition in Article 34 DSA. However, this is only one side: the definition given by the legislature. There are whole PhDs being written on that matter now that I can't sum up here. I think it's something we have to look at quite intensively, also in the context of the second question regarding safeguards, because anything that is not clearly defined can then also be misused, and there's a very thin line here that can be crossed. I'd like to address the question of the business model, which I think is a very relevant one. The Digital Services Act actually contains a lot of rules regarding content moderation, mainly, and then there is the Digital Markets Act that addresses the market power of big tech. But there is actually no rule that really goes to the core of the business model of social media platforms. That's why there's a discussion at the European Parliament right now on a Digital Fairness Act, or an act that would look more closely at the addictive design of online platforms or apps. But that's very much under discussion, and it's difficult. And that bridges to the last question. In the current context, with a lot of legislation coming out of Europe and the EU, we are striving not only to be the regulators but also to develop tech ourselves, so a lot of people are asking for less regulation. I'm not sure we'll see much more regulation in that space over the next months. We have to start implementing what we have right now first.


Beatriz Kira: Thank you. Thank you so much, Amélie. And thank you for sticking to a couple of minutes. Janjira, the floor is yours.


Janjira Sombatpoonsiri: All right. So many great questions. I'll just speak broadly because of the two-minute limit. I think there are two issues here I want to address. One is risks. In Southeast Asia, and specifically within ASEAN, collective efforts to curb disinformation are very limited, unlike in the EU. So far there are limited frameworks to govern platforms together. Individually, each country has different definitions of what poses a risk in the online space and on platforms, and that can get tricky because these notions can be very vague. On nonpolitical issues like public health, child pornography, gambling, and violent graphics, risks can be clearly identified and agreed on across different countries. But when it comes to political issues, each country comes up with its own identification or definition of what constitutes risk, and this varies across political contexts. In Thailand, and I think this also goes for the Philippines, Cambodia, and Vietnam, the notion of anti-state content and defamation of political elites is treated as part of the risk in the online system, and that leads to abuse by state actors who impose laws supposedly meant to curb disinformation. So I think the definition, when it comes to a collective identification of what constitutes risk, is still fragmented in the region, and I think it leaves room for politicization. Now… Oh, that's it. Okay.


Beatriz Kira: That's it. I mean, hold that thought, because we can follow up during lunch maybe, if you want to join us. Yes. Bia, please.


Bia Barbosa: Yeah, I'm not going to go further into the problem of the lack of definition, because I think it exists everywhere, not only regarding duty of care but also regarding risk assessment, systemic risks and everything else. This is a pretty important step of regulation that has to be debated and has to consider the situation and context of each country. Regarding due process, the provision that we had in the Brazilian bill on transparency and responsibility on the internet had to do with the right to information about the moderation of each piece of content and the right to appeal that moderation. I think that is at least something we have to guarantee in any regulation that deals with social media platforms. There are risks, but clearly the situation now is not working; we have plenty of evidence of the negative impact, and it is enormous. So, of course, each country needs to consider its own reality, including the strength or weakness of its democratic institutions, when proposing or determining specific measures. But one way that I think could help regulators not to abuse their power is to make this a multi-stakeholder process. Brazil has models of deliberative councils that establish rules for the implementation of laws or public policies. That is why we, from a civil society perspective in Brazil, not the Brazilian Internet Steering Committee as a whole, because we have not arrived at that point, understand that any Brazilian authority that takes on this task must necessarily have a multi-stakeholder body to ensure the protection of freedom of expression and the quality of the implementation of the law, independently of the government in power.


Beatriz Kira: Wonderful. Thank you so much. I think, to close us, Yvonne, you have the floor to answer the questions and give your final remarks in a couple of minutes. Thank you.


Yvonne Chua: Yeah, this is so difficult. I know, I'm sorry. We really don't have a definition of systemic risk, and I wouldn't be surprised if our lawmakers were to look to the EU's Digital Services Act for inspiration; they really have to, because our lawmakers are really so bad at defining things, including fake news, which is broadly and crudely defined. It was a revelation during the congressional hearings that academics, fact-checkers, and everyone else had to take time out just to explain to them the difference between misinformation, disinformation, and mal-information. It took hours just to get the terminology right. As far as due process is concerned, things get so messed up in the Philippines, especially with the implementation of laws, and that's why we're very grateful that we have a strong, independent judiciary that we can rely on whenever things get rough. About the general duty of care for platforms: this is really something that the Tri-Committee is trying to work towards, because the recommendations cover not only existing problems but also AI and emerging technologies. So perhaps a well-crafted legal framework would be able to address that.


Beatriz Kira: Thank you. Thank you so much, and thank you all. I just wanted to see if Ivar wants to add some comments in one minute. I have less time than the others, but yes, let me take the moment to thank everyone for participating. I do want to take up the questions about the general duty of care and specific duties from the gentleman from the IWF, but we can do that offline. So for the time being, please join me in thanking our panellists, our speakers and the members of the audience for your engagement. We do hope to continue this conversation around duty of care and platform governance. I'm speaking on my own behalf, and perhaps Ivar's, as we continue to work on our research projects. But I think, and please correct me if I'm wrong, all the panellists are also going to keep looking at issues around platform governance, regulation and duty of care, so reach out to us if you want to continue this conversation in other spaces. Thank you, IGF, for having us. Have a really good afternoon and see you next time. Phoebe, Amélie, can you stay for one minute so we can take a photo? Yes, stay online, we can have a photo of the panel. Thank you.



Amelie Heldt

Speech speed

124 words per minute

Speech length

1081 words

Speech time

520 seconds

Hybrid governance model combining DSA compliance with private actors like NGOs taking active roles in implementation creates complex legal relationships

Explanation

The Digital Services Act creates a governance system that involves not just platforms and state authorities, but also private actors such as NGOs that can be funded by the state. This multi-layered approach makes it difficult to untangle the legal relationships between all these actors and requires careful assessment of new instruments.


Evidence

Trusted flaggers system under Article 22 DSA where organizations get certified by Digital Services Coordinators to flag illegal content for speedy platform review


Major discussion point

Platform Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Janjira Sombatpoonsiri
– Bia Barbosa

Agreed on

Multi-stakeholder governance approach is essential for legitimate platform regulation


Disagreed with

– Janjira Sombatpoonsiri
– Bia Barbosa

Disagreed on

Role of government vs. multi-stakeholder approach in platform regulation


Trusted flaggers system under DSA Article 22 allows certified organizations to flag illegal content for speedy platform review

Explanation

Under the Digital Services Act, trusted flaggers are certified through a process with Digital Services Coordinators and can notify platforms about illegal content. The platforms then conduct speedy checks to decide whether to remove the flagged content or not.


Evidence

Article 22 DSA provisions and the certification process with Digital Services Coordinators as the main public authority in charge


Major discussion point

Duty of Care Implementation and Challenges


Topics

Legal and regulatory | Sociocultural


Legal evaluation needed for private actors serving as trusted flaggers to determine if they act as state representatives bound by freedom of expression rules

Explanation

There is ongoing discussion about whether organizations serving as trusted flaggers should be considered state actors, which would change their legal status and make them bound by different rules. This is particularly relevant for content decisions that could impact freedom of expression.


Evidence

Discussion around organizations focused on private interests like intellectual property in commercial contexts


Major discussion point

Content Moderation and Freedom of Expression Balance


Topics

Human rights | Legal and regulatory


DSA addresses systemic risks but questions remain about evaluating private actors’ roles in content moderation

Explanation

While the Digital Services Act contains provisions for addressing systemic risks, there are still unresolved questions about how to properly evaluate and regulate the role of private actors in the content moderation ecosystem.


Major discussion point

Systemic Risk and Business Model Concerns


Topics

Legal and regulatory | Sociocultural



Janjira Sombatpoonsiri

Speech speed

118 words per minute

Speech length

1030 words

Speech time

522 seconds

Multi-stakeholder approach with greater synchronization across initiatives is preferred over purely regulatory solutions due to public distrust in government frameworks

Explanation

Research shows that the majority of respondents express distrust in government regulatory frameworks, leading to preference for a pluralist approach that includes not only regulatory frameworks but also participatory fact-checking, pre-bunking efforts, and community moderation initiatives. This approach aims to create democratic legitimacy for platform governance.


Evidence

Survey in four Southeast Asian countries (Indonesia, Malaysia, Thailand, Philippines) showing over 65% of experts expressed distrust in government regulatory frameworks


Major discussion point

Platform Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights | Sociocultural


Agreed with

– Amelie Heldt
– Bia Barbosa

Agreed on

Multi-stakeholder governance approach is essential for legitimate platform regulation


Disagreed with

– Amelie Heldt
– Bia Barbosa

Disagreed on

Role of government vs. multi-stakeholder approach in platform regulation


Pluralist approach including participatory fact-checking, pre-bunking efforts, and community moderation creates democratic legitimacy

Explanation

The pluralist approach moves beyond the legal sphere to encourage various forms of community engagement in content moderation. This includes participatory fact-checking, structured pre-bunking efforts, and community moderation initiatives that help create broader social acceptance of platform governance frameworks.


Evidence

Examples from Malaysia (Malaysian Media Council), Myanmar, Thailand, and Philippines where civil society organizations collaborate with platforms and parliamentary committees to flag accounts suspected of coordinating disinformation campaigns


Major discussion point

Duty of Care Implementation and Challenges


Topics

Sociocultural | Human rights | Legal and regulatory


Over 65% of experts in Southeast Asian survey expressed distrust in government regulatory frameworks due to autocratization trends

Explanation

The survey results reveal significant public skepticism towards state-led regulation of platforms, which is underscored by low trust in government’s genuine commitment to protecting citizens from online harms. This distrust is reinforced by trends in autocratization across the region.


Evidence

Survey data from Indonesia, Malaysia, Thailand, and the Philippines showing majority distrust in government frameworks


Major discussion point

Trust and Legitimacy in Platform Governance


Topics

Human rights | Legal and regulatory


Agreed with

– Bia Barbosa
– Yvonne Chua
– Ivar Hartmann

Agreed on

Current regulatory frameworks are inadequate and need comprehensive reform


Enforcement often becomes partisan with laws weaponized against opposition voices under guise of combating disinformation

Explanation

In practice, enforcement of platform governance laws often becomes partisan, with laws being used to suppress freedom of expression. Online critics face harsh penalties while platforms are pressured to remove politically sensitive content under the guise of combating disinformation.


Evidence

Examples from Thailand, Cambodia, and the Philippines under the Duterte administration where laws were misused to suppress opposition voices


Major discussion point

Content Moderation and Freedom of Expression Balance


Topics

Human rights | Legal and regulatory


Agreed with

– Bia Barbosa
– Yvonne Chua

Agreed on

Risk of government overreach and weaponization of platform regulation laws



Bia Barbosa

Speech speed

146 words per minute

Speech length

1617 words

Speech time

661 seconds

Supreme Court is changing intermediary liability regime with unclear duty of care framework, creating concerns about strict liability and private censorship

Explanation

The Brazilian Supreme Court is striking down the current intermediary liability rule from the Marco Civil da Internet and replacing it with an unclear duty of care framework. This creates fear that platforms will face strict liability, leading to over-censorship and silencing of minority voices and activists.


Evidence

Eight out of 11 justices have voted to change the regime; Article 19 of Marco Civil da Internet currently requires court order for platform liability; concerns about abuses already committed by platforms against minority groups


Major discussion point

Platform Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Janjira Sombatpoonsiri
– Yvonne Chua
– Ivar Hartmann

Agreed on

Current regulatory frameworks are inadequate and need comprehensive reform


Disagreed with

– Yvonne Chua

Disagreed on

Approach to intermediary liability and duty of care implementation


Typology of application providers based on level of interference with content circulation is needed for proportional liability

Explanation

The Brazilian Internet Steering Committee proposes a classification system that distinguishes between different types of providers based on their functionality and level of intervention in content distribution. This would ensure appropriate and proportional liability rather than a one-size-fits-all approach.


Evidence

CGI’s technical note proposing three categories: providers with no interference (hosting, email), low interference (websites), and high interference (algorithmic recommendation, micro-segmentation, targeted advertising)


Major discussion point

Duty of Care Implementation and Challenges


Topics

Legal and regulatory | Economic


Civil society advocates for multi-stakeholder regulatory bodies to prevent abuse and protect freedom of expression regardless of government in power

Explanation

Civil society organizations propose that any Brazilian authority implementing platform regulation should have a multi-stakeholder body to ensure protection of freedom of expression and quality implementation. This approach draws on Brazil’s existing models of deliberative councils for public policy implementation.


Evidence

Brazil’s existing models of deliberative councils that establish rules for implementation of laws or public policies; Rights on the Network coalition with over 50 civil society organizations


Major discussion point

Trust and Legitimacy in Platform Governance


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Amelie Heldt
– Janjira Sombatpoonsiri

Agreed on

Multi-stakeholder governance approach is essential for legitimate platform regulation


Disagreed with

– Amelie Heldt
– Janjira Sombatpoonsiri

Disagreed on

Role of government vs. multi-stakeholder approach in platform regulation


Concerns about platforms becoming legally responsible leading to increased private censorship of minority groups and activists

Explanation

There are significant concerns that if platforms become legally responsible for content under an unclear duty of care framework, they will engage in over-censorship to avoid liability. This would particularly harm minority groups, journalists, and human rights activists who are already victims of private censorship.


Evidence

Evidence of abuses already committed by platforms that have unduly silenced voices and struggles of minority groups in Brazil


Major discussion point

Content Moderation and Freedom of Expression Balance


Topics

Human rights | Sociocultural


Agreed with

– Janjira Sombatpoonsiri
– Yvonne Chua

Agreed on

Risk of government overreach and weaponization of platform regulation laws


Brazilian Internet Steering Committee proposes principles focusing on systemic risks from network environment policies rather than individual content

Explanation

The CGI proposes that social media should make best efforts to prevent harm from systemic risks inherent to their services, focusing on policies of transparency, moderation, recommendation and content boosting rather than individual content liability. This approach emphasizes the network environment created by platform policies.


Evidence

CGI’s public consultation on principles for social media regulation including protection of freedom of expression, information integrity, and prevention of harm and accountability


Major discussion point

Systemic Risk and Business Model Concerns


Topics

Legal and regulatory | Sociocultural



Yvonne Chua

Speech speed

136 words per minute

Speech length

1109 words

Speech time

487 seconds

Congressional Tri-Committee report represents turning point by explicitly recognizing platforms have duty of care that can be regulated

Explanation

For the first time in the Philippines, a Congressional Tri-Committee report explicitly recognizes that platforms have a duty of care that can and should be regulated. This represents a significant shift from previous approaches that focused primarily on punishing users through libel laws.


Evidence

House Tri-Committee on public order, information and communications technology, and public information released report with 11 recommendations after months of hearings with various stakeholders


Major discussion point

Platform Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Disagreed with

– Bia Barbosa

Disagreed on

Approach to intermediary liability and duty of care implementation


Platform regulation should focus on disclosure requirements, data access, and comprehensive legal framework rather than just user punishment

Explanation

The Tri-Committee recommends amending cybercrime laws to define social media platforms, prescribe penalties for their participation in content-related offenses, and incorporate provisions on data disclosure and content blocking parameters. This shifts focus from criminalizing users to holding platforms accountable.


Evidence

Over 3,800 cyber libel cases filed since 2012; platforms refusing data requests citing US privacy laws; recommendation for platforms to maintain in-country offices


Major discussion point

Duty of Care Implementation and Challenges


Topics

Legal and regulatory | Human rights


Franchising platforms risks turning speech regulation into partisan weapon, as seen with ABS-CBN case under Duterte administration

Explanation

The committee’s recommendation for platforms to secure congressional franchises is problematic because it could be used for political retaliation. The ABS-CBN case demonstrates how franchising can be weaponized when allies of President Duterte refused to renew the network’s franchise in 2020.


Evidence

ABS-CBN, once the country’s largest radio and television network, lost its congressional franchise in 2020 due to political retaliation by Duterte allies


Major discussion point

Trust and Legitimacy in Platform Governance


Topics

Human rights | Legal and regulatory


Agreed with

– Janjira Sombatpoonsiri
– Bia Barbosa

Agreed on

Risk of government overreach and weaponization of platform regulation laws


Existing cybercrime and libel laws have led to over 3,800 cases since 2012 while platforms remain virtually untouched

Explanation

The Philippines has relied heavily on criminal libel combined with the Cyber Crime Prevention Act to address online harm, resulting in thousands of cases against users. Meanwhile, the platforms that enable virality and amplification have faced no accountability under Philippine law.


Evidence

Over 3,800 cyber libel cases filed since the Cyber Crime Prevention Act took effect in 2012; libel is a crime in the Philippines; anti-terrorism law of 2020 expanded surveillance powers


Major discussion point

Content Moderation and Freedom of Expression Balance


Topics

Human rights | Legal and regulatory | Cybersecurity


Agreed with

– Janjira Sombatpoonsiri
– Bia Barbosa
– Ivar Hartmann

Agreed on

Current regulatory frameworks are inadequate and need comprehensive reform


Platforms have demonstrated capacity to respond to crises and adjust when regulatory expectations are clear

Explanation

Platforms have shown they can meet regulatory obligations when expectations are clear, as evidenced by their responses to crises, elections, and coordinated harm in the Philippines and other countries. They adjust by setting up task forces, removing harmful content, and tweaking algorithms when needed.


Evidence

Platform responses during elections and emergencies in the Philippines; adjustments made in EU, UK and other countries with clear regulatory frameworks


Major discussion point

Systemic Risk and Business Model Concerns


Topics

Legal and regulatory | Sociocultural



Ivar Hartmann

Speech speed

125 words per minute

Speech length

1207 words

Speech time

574 seconds

Delay by Congress in multiple jurisdictions has led to unclear frameworks and potential for abuse

Explanation

There is a common pattern across Thailand, Philippines, and Brazil where Congress has been slow to create adequate frameworks for addressing platform governance issues. This delay has led to other institutions stepping in with unclear or potentially problematic solutions, such as Brazil’s Supreme Court creating an unclear duty of care framework.


Evidence

Brazilian Supreme Court stepping in due to Congressional delay; similar delays noted in Philippines and Thailand; problematic business models known for about 10 years without adequate regulatory response


Major discussion point

Platform Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Janjira Sombatpoonsiri
– Bia Barbosa
– Yvonne Chua

Agreed on

Current regulatory frameworks are inadequate and need comprehensive reform


Solutions vary across jurisdictions but address similar challenges from algorithmic recommendation and advertising business models

Explanation

While the specific solutions differ across the four jurisdictions discussed, they all address similar root challenges stemming from the worldwide problematic business model of algorithmic recommendation and advertising. The solutions include different approaches to multi-stakeholder governance, due process protections, and regulatory frameworks.


Evidence

DSA’s trusted flagger certification system; Brazil’s proposed typology of providers; Philippines’ comprehensive legal framework recommendations; Southeast Asia’s pluralist approach


Major discussion point

Duty of Care Implementation and Challenges


Topics

Legal and regulatory | Economic | Sociocultural



Beatriz Kira

Speech speed

161 words per minute

Speech length

2048 words

Speech time

759 seconds

Workshop aims to build trust in online spaces central to digital era by moving beyond inadequate self-regulation solutions

Explanation

The central challenge of the digital era in 2025 is building and maintaining trust in online spaces that have become central to our lives. The conversation has matured from relying solely on platform self-regulation to more comprehensive regulatory models that address threats like disinformation, hate speech, and coordinated harassment.


Evidence

Examples of regulatory shifts including EU’s Digital Services Act addressing systemic risks, UK’s Online Safety Act pioneering duty of care, and Brazil’s Supreme Court reinterpreting intermediary liability


Major discussion point

Trust and Legitimacy in Platform Governance


Topics

Legal and regulatory | Human rights | Sociocultural



Audience

Speech speed

151 words per minute

Speech length

940 words

Speech time

371 seconds

Current regulations don’t adequately address core business model issues like attention-grabbing features and addictive design

Explanation

An audience member with platform legal experience noted that current regulations don’t touch on core business model issues that cause platform governance problems. Features like endless scrolling and other attention-grabbing mechanisms exploit dopamine systems and cause excessive use and addiction.


Evidence

Examples of endless scrolling and features exploiting dopamine systems causing addiction and excessive use


Major discussion point

Systemic Risk and Business Model Concerns


Topics

Economic | Human rights | Sociocultural


Agreements

Agreement points

Multi-stakeholder governance approach is essential for legitimate platform regulation

Speakers

– Amelie Heldt
– Janjira Sombatpoonsiri
– Bia Barbosa

Arguments

Hybrid governance model combining DSA compliance with private actors like NGOs taking active roles in implementation creates complex legal relationships


Multi-stakeholder approach with greater synchronization across initiatives is preferred over purely regulatory solutions due to public distrust in government frameworks


Civil society advocates for multi-stakeholder regulatory bodies to prevent abuse and protect freedom of expression regardless of government in power


Summary

All three speakers emphasize that effective platform governance requires involvement of multiple stakeholders including civil society, NGOs, and private actors, rather than relying solely on government regulation or platform self-regulation


Topics

Legal and regulatory | Human rights | Sociocultural


Current regulatory frameworks are inadequate and need comprehensive reform

Speakers

– Janjira Sombatpoonsiri
– Bia Barbosa
– Yvonne Chua
– Ivar Hartmann

Arguments

Over 65% of experts in Southeast Asian survey expressed distrust in government regulatory frameworks due to autocratization trends


Supreme Court is changing intermediary liability regime with unclear duty of care framework, creating concerns about strict liability and private censorship


Existing cybercrime and libel laws have led to over 3,800 cases since 2012 while platforms remain virtually untouched


Delay by Congress in multiple jurisdictions has led to unclear frameworks and potential for abuse


Summary

Speakers agree that existing regulatory approaches have failed to adequately address platform governance challenges, with laws either being too weak to hold platforms accountable or too broad and leading to abuse of user rights


Topics

Legal and regulatory | Human rights


Risk of government overreach and weaponization of platform regulation laws

Speakers

– Janjira Sombatpoonsiri
– Bia Barbosa
– Yvonne Chua

Arguments

Enforcement often becomes partisan with laws weaponized against opposition voices under guise of combating disinformation


Concerns about platforms becoming legally responsible leading to increased private censorship of minority groups and activists


Franchising platforms risks turning speech regulation into partisan weapon, as seen with ABS-CBN case under Duterte administration


Summary

All three speakers from developing countries share concerns about how platform regulation can be misused by governments to suppress opposition voices and silence minority groups, emphasizing the need for strong safeguards


Topics

Human rights | Legal and regulatory


Similar viewpoints

Both speakers advocate for differentiated approaches to platform regulation that distinguish between different types of providers and focus on platform accountability rather than user criminalization

Speakers

– Bia Barbosa
– Yvonne Chua

Arguments

Typology of application providers based on level of interference with content circulation is needed for proportional liability


Platform regulation should focus on disclosure requirements, data access, and comprehensive legal framework rather than just user punishment


Topics

Legal and regulatory | Human rights


Both speakers support systems that involve civil society organizations in content moderation processes, though through different mechanisms – formal certification in the EU versus collaborative approaches in Southeast Asia

Speakers

– Amelie Heldt
– Janjira Sombatpoonsiri

Arguments

Trusted flaggers system under DSA Article 22 allows certified organizations to flag illegal content for speedy platform review


Pluralist approach including participatory fact-checking, pre-bunking efforts, and community moderation creates democratic legitimacy


Topics

Legal and regulatory | Sociocultural | Human rights


Both recognize that platforms have the technical capacity to implement changes but current regulations fail to address fundamental business model problems that drive harmful behaviors

Speakers

– Yvonne Chua
– Audience

Arguments

Platforms have demonstrated capacity to respond to crises and adjust when regulatory expectations are clear


Current regulations don’t adequately address core business model issues like attention-grabbing features and addictive design


Topics

Legal and regulatory | Economic | Sociocultural


Unexpected consensus

Trust deficit in government-led regulation across different political systems

Speakers

– Janjira Sombatpoonsiri
– Bia Barbosa
– Yvonne Chua

Arguments

Over 65% of experts in Southeast Asian survey expressed distrust in government regulatory frameworks due to autocratization trends


Civil society advocates for multi-stakeholder regulatory bodies to prevent abuse and protect freedom of expression regardless of government in power


Franchising platforms risks turning speech regulation into partisan weapon, as seen with ABS-CBN case under Duterte administration


Explanation

Despite representing different countries with varying political systems (Thailand, Brazil, Philippines), all three speakers independently identified similar patterns of government overreach and public distrust in state-led platform regulation, suggesting this is a global rather than region-specific challenge


Topics

Human rights | Legal and regulatory | Sociocultural


Need for systemic rather than content-focused approaches to platform governance

Speakers

– Amelie Heldt
– Bia Barbosa
– Yvonne Chua

Arguments

DSA addresses systemic risks but questions remain about evaluating private actors’ roles in content moderation


Brazilian Internet Steering Committee proposes principles focusing on systemic risks from network environment policies rather than individual content


Congressional Tri-Committee report represents turning point by explicitly recognizing platforms have duty of care that can be regulated


Explanation

Speakers from very different regulatory contexts (EU, Brazil, Philippines) converged on the importance of addressing systemic risks and platform design rather than focusing solely on individual content decisions, indicating a global shift in thinking about platform governance


Topics

Legal and regulatory | Sociocultural


Overall assessment

Summary

Strong consensus emerged around the need for multi-stakeholder governance approaches, the inadequacy of current regulatory frameworks, and concerns about government overreach. Speakers also agreed on the importance of addressing systemic risks rather than just individual content, and the need for differentiated approaches to different types of platforms.


Consensus level

High level of consensus on fundamental principles despite representing different jurisdictions and regulatory contexts. This suggests these challenges are global in nature and that there are emerging best practices that transcend regional differences. The implications are significant as it indicates potential for international cooperation and learning across jurisdictions in developing more effective platform governance frameworks.


Differences

Different viewpoints

Role of government vs. multi-stakeholder approach in platform regulation

Speakers

– Amelie Heldt
– Janjira Sombatpoonsiri
– Bia Barbosa

Arguments

Hybrid governance model combining DSA compliance with private actors like NGOs taking active roles in implementation creates complex legal relationships


Multi-stakeholder approach with greater synchronization across initiatives is preferred over purely regulatory solutions due to public distrust in government frameworks


Civil society advocates for multi-stakeholder regulatory bodies to prevent abuse and protect freedom of expression regardless of government in power


Summary

Amelie presents the EU’s hybrid model as a working solution with government certification of trusted flaggers, while Janjira argues for a pluralist approach due to distrust in government frameworks in Southeast Asia. Bia supports multi-stakeholder bodies but within a regulatory framework to prevent government abuse.


Topics

Legal and regulatory | Human rights | Sociocultural


Approach to intermediary liability and duty of care implementation

Speakers

– Bia Barbosa
– Yvonne Chua

Arguments

Supreme Court is changing intermediary liability regime with unclear duty of care framework, creating concerns about strict liability and private censorship


Congressional Tri-Committee report represents turning point by explicitly recognizing platforms have duty of care that can be regulated


Summary

Bia expresses strong concerns about Brazil’s Supreme Court creating unclear duty of care frameworks that could lead to over-censorship, while Yvonne views the Philippines’ Congressional recognition of platform duty of care as a positive turning point.


Topics

Legal and regulatory | Human rights


Unexpected differences

Effectiveness of regulatory vs. civil society approaches

Speakers

– Amelie Heldt
– Janjira Sombatpoonsiri

Arguments

Hybrid governance model combining DSA compliance with private actors like NGOs taking active roles in implementation creates complex legal relationships


Multi-stakeholder approach with greater synchronization across initiatives is preferred over purely regulatory solutions due to public distrust in government frameworks


Explanation

Unexpectedly, the EU representative (Amelie) presents a more government-integrated approach while the Southeast Asian representative (Janjira) advocates for approaches that bypass government frameworks entirely. This reverses typical expectations about regulatory approaches in developed vs. developing regions.


Topics

Legal and regulatory | Human rights | Sociocultural


Overall assessment

Summary

The main areas of disagreement center on the appropriate balance between government regulation and multi-stakeholder approaches, the implementation of duty of care frameworks, and the level of trust in government institutions to fairly enforce platform governance.


Disagreement level

Moderate disagreement with significant implications. While speakers agree on the need for platform accountability, their different political and regulatory contexts lead to fundamentally different approaches. This suggests that duty of care implementation will likely vary significantly across jurisdictions based on local trust in institutions and democratic governance quality.


Partial agreements

Partial agreements

Similar viewpoints

Both speakers advocate for differentiated approaches to platform regulation that distinguish between different types of providers and focus on platform accountability rather than user criminalization

Speakers

– Bia Barbosa
– Yvonne Chua

Arguments

Typology of application providers based on level of interference with content circulation is needed for proportional liability


Platform regulation should focus on disclosure requirements, data access, and comprehensive legal framework rather than just user punishment


Topics

Legal and regulatory | Human rights


Both speakers support systems that involve civil society organizations in content moderation processes, though through different mechanisms – formal certification in the EU versus collaborative approaches in Southeast Asia

Speakers

– Amelie Heldt
– Janjira Sombatpoonsiri

Arguments

Trusted flaggers system under DSA Article 22 allows certified organizations to flag illegal content for speedy platform review


Pluralist approach including participatory fact-checking, pre-bunking efforts, and community moderation creates democratic legitimacy


Topics

Legal and regulatory | Sociocultural | Human rights


Both recognize that platforms have the technical capacity to implement changes but current regulations fail to address fundamental business model problems that drive harmful behaviors

Speakers

– Yvonne Chua
– Audience

Arguments

Platforms have demonstrated capacity to respond to crises and adjust when regulatory expectations are clear


Current regulations don’t adequately address core business model issues like attention-grabbing features and addictive design


Topics

Legal and regulatory | Economic | Sociocultural


Takeaways

Key takeaways

Duty of care approaches to platform governance are being implemented differently across jurisdictions, with some focusing on intermediary liability (Brazil’s Supreme Court) while others emphasize systemic risk management (EU DSA, UK approach)


Multi-stakeholder governance models are preferred over purely regulatory solutions due to widespread public distrust in government frameworks, particularly in Southeast Asia where over 65% of experts distrust government regulation


The shift from self-regulation to regulatory models represents a fundamental change in platform governance, moving from content-focused approaches to systemic risk and process-oriented regulation


Trusted flagger systems and participatory fact-checking initiatives show promise as hybrid governance mechanisms that involve civil society in content moderation processes


Legislative delays in multiple jurisdictions have created regulatory voids that courts are filling with unclear frameworks, potentially leading to unintended consequences like strict liability or private censorship


Platform capacity for compliance exists when regulatory expectations are clear, as demonstrated during elections and emergencies, but implementation requires careful balance between safety and freedom of expression


Resolutions and action items

Brazilian Internet Steering Committee launched public consultation on principles for social media regulation with three key areas: protection of freedom of expression, information integrity, and harm prevention


Brazilian Internet Steering Committee prepared technical note proposing typology for application providers to assist Supreme Court in modulating accountability based on functionality and level of content interference


Philippines Congressional Tri-Committee issued 11 recommendations addressing platform regulation, governance oversight, and literacy enforcement as comprehensive framework


Continued research collaboration between UK and Brazil through British Academy-funded project comparing duty of care experiences across jurisdictions


Unresolved issues

Lack of clear definitions for ‘systemic risk’ across jurisdictions creates uncertainty and potential for abuse in implementation


Uncertainty about legal status of private actors serving as trusted flaggers – whether they constitute state actors bound by freedom of expression requirements


How to prevent weaponization of duty of care frameworks by governments for political retaliation, particularly regarding franchising requirements for platforms


Inadequate regulatory frameworks to address core business model issues like addictive design features, endless scrolling, and attention-grabbing mechanisms


Geopolitical tensions around DSA enforcement under Trump administration and pressure to align regulations with US positions


How to maintain democratic legitimacy and due process protections in increasingly polarized political environments


Fragmented approaches to defining what constitutes ‘risk’ in online spaces, particularly for political content across different countries


Suggested compromises

Typology-based approach for platform regulation that differentiates between providers based on their level of interference with content circulation (simple transport/storage vs. algorithmic recommendation systems)


Multi-stakeholder regulatory bodies that include civil society representation to prevent government abuse while maintaining oversight capabilities


Hybrid governance models combining legal frameworks with private sector best practices, such as trusted flagger systems with government certification processes


Focus on procedural requirements (transparency, appeals processes, data access) rather than content-specific mandates to balance safety with freedom of expression


Pluralist approach combining regulatory frameworks with participatory fact-checking, community moderation, and civil society initiatives to create democratic legitimacy


Due process safeguards including explicit requirements for platforms to understand their obligations before liability findings, as seen in DSA Article 73.2


Thought provoking comments

The majority of respondents, around over 65 percent, expressed distrust in government regulatory frameworks… Low trust in government’s genuine commitment to protecting citizens from online harms underpins public skepticism towards state-led regulation of platforms.

Speaker

Janjira Sombatpoonsiri


Reason

This comment introduced crucial empirical evidence that challenged the assumption that government regulation is universally welcomed as a solution to platform governance issues. It revealed a fundamental tension between the need for regulation and public trust in regulators, particularly in contexts with histories of authoritarian overreach.


Impact

This insight reframed the entire discussion by highlighting that the effectiveness of duty of care frameworks depends not just on their design, but on the political context and public trust. It led other speakers to address similar trust deficits in their jurisdictions and influenced the conversation toward multi-stakeholder approaches as alternatives to purely state-led regulation.


I like to call this approach a pluralist approach, which is slightly divergent from a multi-stakeholder approach… it’s about creating democratic legitimacy for such frameworks for society to adopt, accept, and pursue the goal of combating disinformation together.

Speaker

Janjira Sombatpoonsiri


Reason

This comment was intellectually provocative because it distinguished between multi-stakeholder approaches (which are widely discussed in internet governance) and a ‘pluralist approach,’ introducing a more nuanced framework that emphasizes democratic legitimacy and broad-based societal participation rather than just including different stakeholder categories.


Impact

This conceptual distinction elevated the theoretical sophistication of the discussion and influenced later speakers, particularly Bia Barbosa, to emphasize multi-stakeholder deliberative councils as essential safeguards against regulatory abuse. It shifted the conversation from technical implementation details to fundamental questions of democratic governance.


The presence of these two different concepts in different articles of the text… raised concerns that the proposed duty of care would result in practice in a change of the civil liability regime of platforms presupposing a duty of immediately removing such content in an automated manner.

Speaker

Bia Barbosa


Reason

This comment was insightful because it identified a critical gap between theoretical policy intentions and practical implementation outcomes. It highlighted how the same concept (duty of care) can be interpreted and implemented in fundamentally different ways within the same jurisdiction, leading to unintended consequences.


Impact

This observation introduced a crucial analytical framework that distinguished between systemic risk approaches and individual content liability approaches. It influenced the moderator’s summary comments about how ‘the same concept of duty of care has been mobilized differently, not only across jurisdictions, but even within the same jurisdictions, by different kind of stakeholders.’


We’ve seen this before, when ABS-CBN, once our country’s largest radio and television network, lost its congressional franchise in 2020 because the allies of then-President Duterte refused to renew its franchise. It wasn’t about media standards. It was political retaliation.

Speaker

Yvonne Chua


Reason

This concrete historical example was particularly powerful because it demonstrated how seemingly neutral regulatory mechanisms (franchising requirements) can become tools of political retaliation. It provided empirical evidence for abstract concerns about regulatory capture and abuse.


Impact

This example significantly influenced the Q&A session, with Baron Soka directly referencing it in his question about due process safeguards, stating ‘when it comes to franchising, there are no due process safeguards that could ever make franchising or licensing safe.’ It shifted the discussion toward concrete examples of regulatory abuse and the practical limitations of procedural safeguards.


There's also the question how to deal with organizations that are mainly focused on a private interest, such as intellectual property, that's often in a context that's commercially used… it makes the whole context more difficult, and it needs to be assessed carefully how to juggle these new instruments.

Speaker

Amelie Heldt


Reason

This comment was thought-provoking because it identified an underexplored tension within trusted flagger systems – the risk that commercial interests might capture supposedly neutral content moderation processes. It highlighted the complexity of hybrid governance arrangements where private actors take on quasi-governmental roles.


Impact

This observation influenced Ivar Hartmann’s synthesis, where he emphasized the challenge of determining ‘if private entities become trusted flaggers, are they state authorities now?’ It contributed to a more sophisticated understanding of the blurred boundaries between public and private authority in platform governance.


I have a feeling that some issues with platform governance are touched or caused by the core business model of these platforms, the attention-grabbing model… I didn't see current regulation touch much on these kinds of core issues.

Speaker

Yuzhe (audience member)


Reason

This question was particularly insightful because it challenged the entire premise of the discussion by suggesting that duty of care approaches might be addressing symptoms rather than root causes. It highlighted the potential inadequacy of content-focused regulation when the fundamental business model incentivizes harmful design.


Impact

This question prompted Amelie to acknowledge a significant limitation: ‘there is actually no rule that really goes at the core of the business model of social media platforms.’ It revealed a fundamental gap in current regulatory approaches and suggested that duty of care frameworks, while important, may be insufficient without addressing underlying economic incentives.


Overall assessment

These key comments fundamentally shaped the discussion by introducing three critical analytical frameworks: (1) the importance of political context and trust in determining regulatory effectiveness, (2) the distinction between theoretical policy intentions and practical implementation outcomes, and (3) the tension between addressing symptoms versus root causes in platform governance. The comments collectively moved the conversation from a technical discussion of regulatory mechanisms to a more sophisticated analysis of power dynamics, democratic legitimacy, and systemic limitations. They revealed that duty of care approaches, while promising, face significant challenges related to political capture, definitional ambiguity, and the fundamental misalignment between platform business models and public interest goals. The discussion evolved from presenting different national approaches to identifying common underlying tensions that transcend jurisdictional boundaries.


Follow-up questions

How are the different legal frameworks defining systemic risks, and how can risks that are not illegal content be identified in a way that does not end up restricting access to information and freedom of expression?

Speaker

David Sullivan from the Digital Trust and Safety Partnership


Explanation

This is crucial for understanding how different jurisdictions operationalize duty of care without overreaching into legitimate speech


When should we have confidence in due process protections, and what kinds of due process protections will actually work in the face of increasingly lawless governments?

Speaker

Berin Szóka from TechFreedom


Explanation

This addresses the fundamental challenge of creating safeguards that can withstand political abuse and authoritarian tendencies


What are the thoughts on Article 73(2) of the Digital Services Act as a due process safeguard, which requires the Commission to explicitly say what the platform should have done before issuing a finding of liability?

Speaker

Berin Szóka from TechFreedom


Explanation

This explores specific mechanisms within existing legislation that could serve as models for due process protection


What kinds of safeguards would be adequate to ensure that duties of care can’t be used against particular kinds of content, such as transgender content?

Speaker

Berin Szóka from TechFreedom


Explanation

This addresses the risk of duty of care being weaponized against marginalized communities and specific types of content


What understanding can research provide about the duty of care of states when implementing platform governance laws to prevent abuse that would be detrimental to citizens and democratic settings?

Speaker

Brianna (audience member)


Explanation

This shifts focus from platform obligations to state responsibilities and the need for safeguards in how governments implement and enforce these laws


What is the direction for resolving issues caused by core business models of platforms, such as attention-grabbing features like endless scrolling that exploit dopamine systems and cause addiction?

Speaker

Yuzhe (former platform legal counsel)


Explanation

This addresses whether current regulations adequately tackle the fundamental design features that create harm, beyond just content moderation


How does a general duty of care on platforms provide promise to cover new technological developments without having to explicitly mention them in legislation?

Speaker

Andrew Campling from the Internet Watch Foundation


Explanation

This explores whether duty of care frameworks can be future-proof and adaptable to emerging technologies like AI


Do jurisdictions feel pressure to align their social media regulation with the US administration’s approach, given that DSA enforcement has become geopolitically contentious under the Trump administration?

Speaker

Audience member (unnamed)


Explanation

This examines how geopolitical tensions and US policy changes might influence regulatory approaches in other countries


How to evaluate the role of private actors (trusted flaggers) and whether they should be considered state actors when they take on content moderation responsibilities?

Speaker

Amelie Heldt


Explanation

This addresses the complex legal relationships in hybrid governance models and the implications for freedom of expression


How to ensure regulatory frameworks have democratic legitimacy and broad societal acceptance through multi-stakeholder approaches?

Speaker

Janjira Sombatpoonsiri


Explanation

This explores how to build public trust and legitimacy in platform governance frameworks, especially in contexts where government trust is low


How will the Brazilian Supreme Court ensure implementation of its decision on intermediary liability, considering Brazil lacks a regulatory authority for this topic?

Speaker

Bia Barbosa


Explanation

This addresses the practical challenges of enforcement when judicial decisions create new obligations without corresponding regulatory infrastructure


Can ironclad due process protections be created for platform franchising requirements, or should franchising be abandoned entirely?

Speaker

Yvonne Chua


Explanation

This examines whether legislative franchising of platforms can ever be made safe from political abuse, given historical examples of retaliation


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.