WS #164 Strengthening content moderation through expert input

17 Dec 2024 12:30h - 13:30h

Session at a Glance

Summary

This discussion focused on how social media platforms can engage with external stakeholders, particularly academics and human rights experts, to improve content moderation policies. The panel featured representatives from Meta, academia, and human rights organizations.

Jeffrey Howard argued that academics should take an “educative” rather than “activist” approach when advising platforms, providing frameworks and insights rather than pushing specific policy positions. Conor Sanchez from Meta described their process for consulting experts on policy development, giving examples of how external input shaped policies on crisis situations, functional identification, and human smuggling content.

Tomiwa Ilori emphasized the importance of meaningful collaboration with local experts and institutions, centering victims’ experiences, and adopting a bottom-up governance approach. He also stressed the need for platforms to be transparent about how they apply expert input and to proactively address human rights concerns.

Participants discussed challenges around language capacity and cultural competence in content moderation. Meta representatives highlighted their investments in multilingual moderation and partnerships with local NGOs, while acknowledging ongoing difficulties.

The discussion underscored the complexity of content moderation decisions, with stakeholders often disagreeing on optimal policies. Speakers emphasized the importance of sustained engagement with diverse experts, iterative policy development, and balancing different perspectives. Overall, the panel illustrated the intricate process of incorporating external expertise into platform governance while retaining ultimate decision-making responsibility.

Keypoints

Major discussion points:

– The role of academics in engaging with social media platforms on content moderation policies

– How Meta conducts consultations with external stakeholders and experts to inform policy decisions

– Ways platforms can learn from human rights experts to ensure rights-centered content moderation

– Challenges around language, cultural context, and capacity in global content moderation efforts

– The process of making difficult policy decisions based on stakeholder input

Overall purpose:

The goal of this discussion was to explore how social media platforms like Meta engage with external experts, particularly academics and human rights specialists, to develop content moderation policies that are effective, ethical, and rights-respecting on a global scale.

Tone:

The tone was largely informative and collaborative, with speakers sharing insights from their experiences working on these issues. There was an emphasis on the complexity of the challenges and the need for ongoing dialogue and iterative processes. The tone remained consistent throughout, maintaining a constructive and solution-oriented approach to discussing difficult policy questions.

Speakers

– Tomiwa Ilori: Advisor for the B-Tech Africa Project at UN Human Rights

– Conor Sanchez: Content policy team and stakeholder engagement team at Meta

– Jeffrey Howard: Academic researcher, ethicist

– Gandhi Emilar: Moderator

Additional speakers:

– Naomi Schiffman: From the Oversight Board (mentioned but not present)

– Mike Walton: UNHCR

– Adnan: Participant from Iraq

Full session report

Expanded Summary of Discussion on Social Media Platform Engagement with External Stakeholders

This discussion explored how social media platforms, particularly Meta, engage with external stakeholders such as academics and human rights experts to improve content moderation policies. The panel featured representatives from Meta, academia, and human rights organisations, focusing on the complexities of developing effective, ethical, and rights-respecting content moderation policies on a global scale.

Role of Academics in Platform Policy Development

Jeffrey Howard, an academic researcher and ethicist, argued for an “educative” rather than “activist” approach for academics engaging with social media platforms. He posited that academics should provide frameworks and insights rather than pushing specific policy positions, preserving their distinctive role and differentiating their input from that of other stakeholders.

Howard contended that academics should not view themselves as voting stakeholders in platform decisions, as they are not directly affected by policies in the same way as other constituents. Instead, he suggested that academic engagement with platforms can be intellectually generative, offering unique perspectives and analytical frameworks to inform policy development.

Meta’s Stakeholder Engagement Process and Content Moderation Efforts

Conor Sanchez, representing Meta’s content policy and stakeholder engagement teams, detailed the company’s process for consulting experts on policy development. Meta’s approach is based on principles of inclusivity, expertise, and transparency. Sanchez provided examples of how external input has shaped policies on crisis situations, functional identification, and human smuggling content.

Sanchez highlighted Meta’s significant investment in safety and security, with over 40,000 people working on these issues. The company’s content moderation efforts cover more than 70 languages, demonstrating the scale and complexity of their operations. Sanchez also mentioned Meta’s Trusted Partner Program, which facilitates collaboration with local NGOs and experts.

The human smuggling content policy was discussed as a prime example of the complexity in policy-making. Sanchez explained how Meta had to balance humanitarian concerns with the need to prevent exploitation, resulting in a nuanced policy that allows certain types of content while prohibiting others.

Human Rights-Centred Approach to Content Moderation

Tomiwa Ilori, an advisor for the B-Tech Africa Project at UN Human Rights, emphasised the importance of a human rights-centred approach to content moderation. He advocated for meaningful collaboration with credible human rights institutions, particularly those with local expertise and “boots on the ground” in specific contexts. Ilori also highlighted the relevance of the UN Guiding Principles on Business and Human Rights in this context.

Ilori stressed the need to centre victims’ experiences and adopt a bottom-up governance approach. He argued for increased access to platform data for independent research, especially in underserved contexts such as the majority world. This approach, Ilori contended, would provide more contextual nuances and understanding of issues on the ground, informing how platforms can learn from their impact in diverse settings.

Challenges in Content Moderation

The discussion highlighted several significant challenges in global content moderation efforts:

1. Language and Cultural Differences: Sanchez acknowledged the difficulties in moderating content across diverse linguistic and cultural contexts. Despite Meta’s investment in multilingual moderation, challenges persist, as evidenced by an audience question from Iraq about the availability of policies and engagement in languages other than English.

2. Capacity and Resource Constraints: Mike Walton from the UN Refugee Agency raised concerns about the capacity to support content moderation across a wide breadth of languages.

3. Accessibility for Local Stakeholders: An audience member from Iraq, Adnan, pointed out the difficulty some local researchers and NGOs face in reaching Meta experts to engage on issues or ask questions.

4. Sustaining Engagement: Gandhi Emilar, the moderator, highlighted the challenge of sustaining ongoing engagement with academics and experts over time. The importance of relationship-building and long-term collaboration was emphasised as crucial for effective policy development.

Areas of Agreement and Disagreement

The speakers largely agreed on the importance of diverse stakeholder engagement and the need for sustained, ongoing collaboration between platforms and experts. There was also consensus on the significant challenges faced in content moderation across different languages and cultures.

However, some differences emerged in approaches to stakeholder engagement. While Jeffrey Howard advocated for an educative role for academics, Tomiwa Ilori emphasised the importance of including victims’ voices and lived experiences in policy development. Similarly, while Conor Sanchez focused on Meta’s existing process of consulting various experts, Ilori advocated for a more bottom-up approach emphasising local context and actors.

Conclusion and Future Directions

The discussion underscored the complexity of incorporating external expertise into platform governance while retaining ultimate decision-making responsibility. It highlighted the need for ongoing dialogue and iterative processes in developing content moderation policies. Sanchez emphasised that policy-making is an ongoing process, with continuous refinement based on new information and stakeholder input.

Several unresolved issues and potential areas for further exploration emerged, including:

1. Effectively scaling bottom-up content governance approaches

2. Balancing conflicting stakeholder perspectives in policy decisions

3. Improving accessibility for local stakeholders to engage with platforms

4. Sustaining long-term engagement with academics and experts

5. Increasing platform capacity to support content moderation across diverse languages

6. Exploring the potential role of AI in multilingual content moderation

7. Establishing best practices for imported content moderation labels

8. Finding ways for platforms to support objective institutional research without compromising independence or credibility

These points suggest a rich agenda for future discussions and research on the intersection of social media governance, content moderation, and human rights.

Session Transcript

Gandhi Emilar: Panelists online, three speakers online, if you can just quickly introduce yourselves.

Tomiwa Ilori: Okay. Hi, my name is Tomiwa Ilori. I’m currently an advisor for the B-Tech Africa Project at UN Human Rights. Thank you for having me.

Gandhi Emilar: Tomiwa, when you speak next, please increase your volume. Can we go to Conor, please?

Conor Sanchez: Yes, thank you. Hi, everybody. My name is Conor Sanchez, and I am with Meta. I’m on the content policy team, and specifically on the stakeholder engagement team. Pleased to be with you all today. I’m sitting in California.

Gandhi Emilar: Thank you so much, Conor. And we have our third speaker online. Maybe when she joins us, we can ask her to introduce herself. So, quickly going through some of the challenges that we face as we do external stakeholder engagement at Meta. The first one, which I think most of you face as well in your work, is really identifying the experts. Who are these experts? How do we even define what expertise is? Can we look at lived experiences? Can we look at the impacted ones? Who are the potentially impacted? Who are the vulnerable? Who are the underrepresented groups? So identifying experts in itself is a challenge. The second one is really, once we identify the experts, how do we manage conflicting interests within the stakeholder maps that we have? What are the agendas that they have that can influence input and objectivity on our policies, on our product policies, on our content policies? But beyond just identifying the experts, it is acknowledging that there’s a spectrum of experts; it’s not just one type of expert, not just the academics or civil society groups that I’m seeing in the room and also online. The third one is the power dynamics: not all NGOs are the same, not all stakeholders are the same. Different stakeholders have different levels of influence within the stakeholder groups themselves and between the different stakeholder groups. How do we also communicate complex information? I’m happy to see Yopun from Diplo here; as a former Diplo ambassador, I know you don’t have your headphones. It’s important; I think we have benefited, personally I’ve benefited, from the capacity building programs that the organization has run. And for us at Meta, it’s important to acknowledge that the stakeholders we are engaging might have lived experiences, they might be experts in their fields, but not everyone understands our policies. So we have to really work hard and ensure that before we communicate that complex information, or before we communicate any of the policy changes, we engage in capacity building. Opportunities, I think there are many, and this panel will look at some of those opportunities. Access to specialized knowledge: we don’t want this to just be an extractive process, we want it to be mutually beneficial, not only to us, but also to the experts that we are speaking to. It improves our policies, that goes without saying; not only the substance, but the process itself and the credibility of the work that we are doing. Transparency, I think, is also another opportunity. It sounds like a very easy concept, but obviously it is not, because with transparency comes accountability, and that’s something I think we need to talk about. And also building trust. We know that there’s a trust deficit between us and stakeholders. Do we need intermediaries to help us build that trust, or is this something that we can work on? And we know that building trust is not a sprint but a marathon, and we need to ensure that we are in it for the long haul. I will just end here, but on opportunities, we can also talk a lot more about what we can gain from the process itself.
Moving over, should I start with you? What are some of the issues and your experiences working with Meta in terms of stakeholder engagement? And then we can go into specific questions.

Jeffrey Howard: That sounds great. So I’ve been given a brief to speak for about eight to ten minutes about my experience. And I’m going to be thinking in particular about the role of academics in this process. So consider just some of the questions that bedevil policymakers at platforms like Meta. Should platforms restrict veiled threats of violence or only explicit threats of violence? Should rules against inciting or praising violence be modified to include a carve-out for speech that advocates or praises justified self-defense? When should graphic content depicting real-world violence be permitted for awareness-raising purposes? When should otherwise violating content be permitted on grounds of newsworthiness, for example, because the speaker is an important politician? What kinds of violations should result in permanent bans from a platform, and what kinds should result in temporary suspensions? How can platforms better monitor and mitigate suspicious conduct by users in order to prevent abusive behavior before it happens? So these are just some of the topics about which I’ve engaged with social media platforms over the years in my work as an academic researcher. I’ve engaged principally with various teams within Meta, also teams within the Oversight Board, and policymakers throughout the UK and EU. And the thing about the questions I just listed that I want to call your attention to is that they’re not empirical questions that can be answered with social science. They’re normative questions about how to strike the balance between different ethical values when they come into conflict. Now, the academic discipline of ethics is entirely dedicated to exactly that issue. And so my role as an ethicist is to bring the tools of ethics, conceived widely, to bear on the proper governance of online speech and behavior. And what I want to do in the next couple of minutes is to sketch two alternative theories of the proper role of academics in undertaking this kind of work, tracing some of their implications for how we should engage with platforms. So the first conception I’ll discuss is what I’ll call the activist conception, and I think this is really common. On this view, the academic has already made up his or her mind about what the right answer is on a particular issue and sees her role as that of pressuring or persuading or lobbying the platform to adopt her view. So consider that question I mentioned about whether there should be a self-defense carve-out to the policy prohibiting advocacy of violence. On this approach, the academics have already made up their mind about whether it’s yes or no, and the goal is simply to persuade platforms to go their way. Usually, academics who follow this approach have already written an academic paper publishing exactly the view that they hope to defend, and then they want to be able to show that that paper has had impact, for professional incentive reasons. So they’re really activists for their own research. Now, I think this is a really common way for academics to engage, and I think it’s completely misguided. I think it’s the wrong way for academics to engage. I think we should reject the activist conception of the role of academics in stakeholder engagement, and I think we should reject it because it diminishes the distinctive role that academics can play in this process, because it eliminates the distinction between the role academics can play and the role that other stakeholders can play.
Now, if you work for an NGO dedicated to fighting violence against women and girls or an organization dedicated to children’s mental health, I think the activist conception makes complete sense. You’ve figured out what policy best serves the needs of those you represent, and you’re going to the wall for those people to advocate for that policy. And so the activist view flows from these organizations’ purpose. But I’d argue that the distinctive role of an academic isn’t to be an activist. It’s something else, and that leads me to the second view, which is the one I’ll defend, and for lack of a better term, I’ll call it the educative view. And the idea here is that the role of the academic is to just educate the audience about the relevant cutting-edge academic research that bears on whatever the topic is under discussion. And in this way, it draws on the way academics ideally already teach their classes, which is to inform students about the range of research pertinent to a particular topic. So when I teach a class in London on the ethics of counter-terrorism policy or the ethics of crime and punishment, I’m not just teaching my own preferred views in those various controversies, I teach the most reasonable arguments on each side of an issue so that students are empowered to make up their own minds. Likewise, for my colleagues in empirical political science, when they’re teaching, for example, the causes of political polarization, the professor doesn’t just teach students his own favorite explanation that he’s published on in a recent article in the American Political Science Review. The right way to teach a class on that topic would be to identify the range of potential causes in the academic literature, pointing out the evidence for and against. Now, he might also flag that he favors a particular view, but his goal isn’t to ram his preferred theory into students’ brains, it’s to empower them with frameworks and insights so that they can make up their own minds. And my thought for you today is that that educative conception should guide academics in how they engage with platforms and other decision makers. Our role isn’t just to tell platforms what we think the right answer is as we see it, as if platforms were counting votes among stakeholders. And by the way, even if platforms were counting votes among stakeholders, it’s not clear academics should get a vote since we’re not really stakeholders, we’re not particularly affected by policies in the way particular constituents are. Our input is solicited because we have knowledge that’s relevant to their decision. Our role is to give platforms insights and frameworks so that they can make up their own minds. So let me make that just a little more concrete for you before I finish. So when I first engaged with Meta on the topic of violent threats and whether veiled threats should be restricted, I saw my role as getting them up to date with philosophical theories about what threats are, about how they function, about what harm they can cause, why speakers might have a moral duty to refrain from threatening language. What legitimate role sarcastic or hyperbolic threats might play in valuable self-expression. I also saw my role as informing them about theories from legal philosophy about what to do in tricky cases where all the candidate rules in a given policy area are either under-inclusive or over-inclusive, which I think happens quite a lot in the content moderation space. 
Likewise, when my team presents public comments to the Oversight Board, we of course indicate what result we think the Oversight Board should reach, but that’s much less important than the framework of arguments we offer to reach that conclusion. So for example, one central critique of deploying international human rights norms for content moderation is that these norms fail to offer adequate guidance; they’re just too indeterminate. But those who make this critique in the literature almost always overlook the fact that there’s a huge amount of cutting-edge philosophical work on principles like necessity and proportionality, which I think can be really, really helpful in giving guidance to content moderation decision makers. And so part of my role is to help decision makers within platforms learn about that work. Now, wrapping up, I’d like to emphasize that the case for the educative model is bolstered by the obvious fact that experts disagree about what to do. And so academics simply cheerleading for one side of the argument is not particularly helpful. The role of academics is to supply platforms with the insights they need to exercise platforms’ own judgment about what to do. And I think judgment on ethical questions is essential. If I were to tell you that I was opposed to the death penalty and you asked me why, and I said, well, I asked some ethics professors and they told me they were opposed to it and I believed them, that would be an intellectually and morally unserious set of reasons for having that view. We are all responsible for making our own judgment about what’s right and wrong. And while ethicists can help us think through the arguments, the judgment about which argument is most convincing must ultimately be ours. And that goes for a platform too. Platforms like Meta can consult experts, but ultimately it’s their responsibility to make a judgment about what to do. The last comment I’ll make is that many academics are reluctant to engage with decision makers in this space. And I think that’s a huge mistake, because engaging with platforms and other decision makers like the Oversight Board is hugely intellectually generative. It can help us identify new topics to write and think about. And it can also give us an opportunity to make a positive practical difference through our work. So that’s how I see the role of academics in engaging with platforms. Thanks.

Gandhi Emilar: Thank you so much, Jeff. This is really, really useful. I think one of the points that I took here is “intellectually and morally unserious” views; I think I’ll use it moving forward. But you really put forward, I think, a compelling argument on why academics should engage in these spaces, and I’m sure a lot of people have questions for you. If we can just move on to the other speakers, and we will get back to you. Now I want to move on to Conor, who leads our external engagement and who, with Jeff, is the brains behind this workshop, for him to take us through some of the case studies that show how our engagements with academics have impacted policy decisions. So Conor, over to you. Could we put up Conor’s screen? Oh, there it is. Great, everyone can see it now. Super.

Conor Sanchez: Wonderful. Yes, thank you so much. Can you hear me okay?

Gandhi Emilar: Yes, we can hear you.

Conor Sanchez: Great. Thank you so much, Emilar. And thanks, Jeff, for that first set of comments and provocation for this discussion. For everybody joining, again, my name is Conor Sanchez. I’m on the stakeholder engagement team here at Meta. And I’m going to build off of Jeff’s remarks to briefly share a bit about how we carry out consultations with external stakeholders, including academia as well as independent researchers. We engage these experts for a variety of reasons and on a wide variety of topics. So I think this will give you a taste of how that process runs and how we take those consultations and the insights they share into account as we work through a particular policy. Just backing up for a second, for those who may be unaware, our content policy team is the team that’s in charge of our community standards. The community standards, at the simplest level, are rules to make our platforms a space where people feel empowered, where they feel safe to communicate. And importantly, these standards are based on feedback, feedback we’ve received from a wide variety of individuals who use our platforms, but also the advice of experts. And I have a few case studies that I think exhibit exactly how these consultations have had an impact on our policy. An important detail about our community standards: these are global, they apply to everyone around the world, and we’ve become increasingly transparent about where we draw the line on particular issues. And laying out these policies in detail allows us to have a more productive dialogue with stakeholders on how and where our policies can improve. As Emilar mentioned, we do a lot of capacity building. We realize that not everybody is extremely savvy about exactly how our rules work or how our enforcement works, so we also do a lot of education to make sure that people understand where the status quo is and why we’ve drawn the line in certain areas, even as we seek their feedback on improving and evolving our community standards. So as you can see, this is a long list of what can be found in our Transparency Center. It covers quite a bit. This contains everything from hate speech to violent and graphic content to adult nudity and bullying on our platforms. The consequences for violating our community standards vary depending on the severity of the violation and the person’s history on the platform. So each of these rules carries different enforcement mechanisms, and that in and of itself is something that we seek feedback on. What is the proportional response to somebody who violates our rules? What happens if a rule is violated twice, three times, or seven times? We want people to learn about our rules. We want them to get better and come back and be responsible community members. And so at what stage is each enforcement mechanism appropriate? And just to give you a sense of how we involve experts in our policy development process, we really bring them into a very robust process of how we’re developing a policy. We create an outreach strategy to make sure that we are including a wide range of stakeholders, and then we carry out that outreach. Ultimately, as Jeff said, the decision sits with us. We take everything that we’ve heard from our consultations, we provide that to our internal teams, to leadership, and we make a policy recommendation at what’s called our policy forum.
This is sort of the preeminent space within the company where we consider some of the biggest questions that are plaguing our community standards and make a decision on the direction that we want to go in. In terms of who we engage, this is the question I think I get the most: how do you decide who to engage with? How do you find relevant experts? How do we make sure that some vulnerable groups, or groups that haven’t been heard, are being heard in the process? There’s no simple formula for this, but we have developed a structure and a methodology that helps guide us as we reach out externally. So in terms of who we engage with, first, we can’t meaningfully engage with billions of people, though that is certainly our stakeholder base; it includes billions of people. So we seek out organizations that represent the interests of others. We also really look for expertise in particular fields, and these don’t have to be experts in content moderation or content enforcement or even internet governance or platform governance; they could be experts in irregular migration, in psychology. All of those things can really be informative for our policy. And then in terms of the categories of stakeholders, we’re looking at NGOs, we’re looking at academic researchers, human rights experts. They can also be people with lived experiences who are on our platforms, using our tools in certain ways. And in terms of guiding who we engage, we really have three principles or values that we look for, inclusivity, expertise, and transparency, and making sure that we’re building that trust with the stakeholder base as we speak with them. So jumping into a few examples of how this has actually played a part in our policy development process. In 2022, we published what’s called our crisis policy protocol. What this did was codify our content policy responses to crisis situations. The framework we aimed to build would assess crisis situations that may require a specific policy response. And so we explored how to strengthen our existing procedures and include new components, such as certain criteria for entry into and exit from a crisis designation. As we developed this, we sought consultations with global experts who had backgrounds in things like national security and international relations, humanitarian response, conflict and atrocity prevention, and human rights. And in these consultations, the stakeholders and experts that we spoke to really helped surface key factors that would be used to determine whether a crisis threshold has been met. This included whether there were certain political events or large demonstrations in the street, or certain states of exception or policies that were put into place. All of these things were based on the experience and the expertise of the experts that we consulted and really informed the criteria that we continue to use to this day. Another example is our functional identification process. This policy focused on how we treat content that could identify individuals beyond explicit factors such as a person’s name or an image. We already had policies for when somebody’s name or image was shared in a certain context and that posed a risk to them; then we would remove that content.
But functional identification concerned more subtle factors: information that was being shared about an individual without naming them, but that could still result in their being identified, and they could be put at risk as a result of that identification. So the expertise that we sought for this policy development included privacy and data security experts, and journalists, who are often publishing the names of individuals in their stories, individuals who sometimes may need to remain anonymous. And so there we’re really drawing on decades, if not centuries, of experience of people who have grappled with this question before, of what details to provide in a publication that will be read by many, many people, and therefore the types of guidelines that they need to put in place to protect those identities. We also spoke with a global women’s safety expert advisor group that we manage. This includes various non-profit leaders, activists, and academic experts who focus on the safety of women on and offline. And this stakeholder input, these engagements, really helped our team develop a policy that, upon escalation, allows us to consider additional factors beyond just name and image, including somebody’s age, their ethnicity, or their distinctive clothing: if all three of those in combination are published online, and we have a signal from a local NGO that says this could put somebody at risk, then that would allow us to remove that content based on our new policy. And a last example of how expert input played a role in our policies: in 2022, we developed a policy on how to treat content soliciting human smuggling services. Our policies at that time, under our human exploitation policies, distinguished human smuggling from human trafficking, recognizing human smuggling as a crime against the state and human trafficking as a crime against a person. What we wanted to tease out with experts was really figuring out: what are the risks posed to people who solicit this type of content online? What are the risks associated with leaving this content up? And what are the risks associated with removing this content? And so we heard a wide variety of different insights from the stakeholders that we spoke with. The experts we spoke with included people who work at international organizations focused on migration, refugee protection, and organized crime. It also included academics who focus on irregular migration, human smuggling, and refugee and asylee rights, and criminologists. We also spoke with former border enforcement officials, people who have worked at borders around the world, and we really drew on this expertise to figure out where we should draw the line on this policy. They highlighted the risks posed to individuals, especially vulnerable individuals, who solicit this content. They also highlighted what removal would mean for somebody who may be in a very vulnerable position, where they are escaping conflict, oppression, or otherwise unsafe conditions in their countries of origin. And ultimately, this led us to adopt a policy that minimized the risk of removing these types of posts by providing a safety page with information on immigration. So we would remove the solicitation of human smuggling services, but we would also provide a safety page for that individual who may be requesting that.
And in developing that safety page, we also consulted experts to determine what information would be most impactful for this vulnerable population. Great, thank you so much, and that concludes my remarks. I’ll pass it back to Emilar.

Tomiwa Ilori: Thank you very much, Emilar. Can you hear me clearly? Can everyone hear me before I go on? Yes? Okay, thank you again. Quickly to my question. My understanding of the question is that it can be subdivided into two broad areas: the first is how can, and the second is how should, platforms learn from human rights experts to ensure a rights-centered model for content moderation. Like the speakers before me have said, there is usually really no one-size-fits-all, because, for example, in the context of Meta and other major platforms, they operate in very, very many contexts, including very complex contexts. Saying this has to be the solution is going to be very problematic. So my understanding of some of the things that I think platforms can learn, also based on my interaction with platforms like Meta in the past, is, number one, ensuring meaningful collaboration. And what do I mean by meaningful collaboration? This involves, for example, increasing collaboration with established and credible human rights institutions and organizations to identify human rights issues. This also involves, for example, devolving focus away from Western institutions who, quote-unquote, pretend to work on content moderation issues, especially in contexts that they do not have expertise in. It involves identifying and working directly with those institutions that have boots on the ground regarding content moderation in a given context. This could help identify specific pain points for platforms and these actors, and collaborating with these institutions and organizations to think through possible solutions. And I think this also has been mentioned earlier by both Conor and Jeff. Number two is centering victims. What I mean by that is it should involve broadening the scope of human rights expertise to include victim-centered feedback on the impact platforms have, especially on vulnerable persons. When we think of experts, I think we often miss out on centering victims, whose experiences are usually the focus of most engagements. One key way of learning from these experts is also to focus on including the voices of the victims impacted by these activities, who may or may not be experts in content moderation and governance but have lived experiences. And a third one is adopting a bottom-up content governance approach. What I mean by this is working with key actors and experts in specific domestic contexts, such as national human rights institutions, civil society, and academics. This provides more contextual nuances and understanding of the issues on the ground, how these actors are currently thinking about them, and how exactly platforms can learn from their impact on the ground. A similar example was given earlier by Conor regarding the crisis policy protocol, where consultation revealed certain factors to consider in determining what qualifies as a crisis. A fourth way that platforms can learn from human rights experts is increasing access to platform data for independent research, especially in underserved contexts such as the majority world. There’s an increasing need to understand how platforms shape critical aspects of human rights challenges today.
But the tools for reaching such understanding, such as the raw platform data that could point to possible solutions to these challenges, are unavailable for analysis by most majority-world researchers. Lastly, another way that platforms can learn is engaging with the trove of existing resources and platforms out there. This includes both technical and non-technical outputs developed by international organizations such as the UN, academic institutions, and civil society organizations. Not only this: how these resources are adapted for platform use should also be made transparent. For example, where certain resources are applied by platforms, it should be clear what was applied and why. And in cases where feedback is sought but not utilized, it should also be clear as to why. Now, the second part of the question, which I’m going to rush through quickly because of time, is what platforms should learn from human rights experts. Number one is practical application of human rights standards. And I know this is a very, very tiny and difficult area, especially for companies. But since human rights experts draw from human rights standards in the analysis of platform activities, it will be useful to look at the most proximate standard. For example, in this context, the UN Guiding Principles on Business and Human Rights would easily apply. And the UNGPs, especially as related to their application to technology companies, provide useful ways for companies to ensure that their activities are rights-centered. For example, one of the ways that UN Human Rights has done this is through the B-Tech project, which focuses on the application of the UNGPs to digital technologies, and they have quite a lot of resources in this area. B-Tech has four strategic focus areas: addressing human rights risks in business models; human rights due diligence and end-use; accountability and remedy; and a smart mix of measures, which involves exploring regulatory and policy responses to human rights challenges linked to digital technologies. Another way platforms should learn from human rights experts includes ensuring participatory development of content moderation rules and processes, and I was happy to listen to Conor earlier, because this is a very practical demonstration of what this participatory development refers to. Thirdly is proactive accountability. This helps to engender trust, and it involves operationalizing measures that make platforms accountable regarding human rights harms even before victims or the general public are aware of such harms. This includes, but is not limited to, proactive human rights impact assessment of products and services to identify harms, communication of the extent to which such harms impact human rights, and the steps taken to remedy such harms. Lastly, platforms should learn agile and dynamic adaptivity from human rights experts. What do I mean by that? Platforms can learn to be agile and adaptive when it comes to applying international human rights standards to emerging and cutting-edge content moderation challenges. For example, what should be the best standard practice, as already highlighted by human rights experts, regarding imported content moderation labels? Another example: in what ways can platforms fund or support objective institutional research without impeding its independence or credibility?
So, in my view, these are, in what is of course a rushed presentation, more or less some of the ways that I think platforms can and should learn from human rights experts to ensure a rights-centered model for content moderation. Thank you very much, Emilar.

Gandhi Emilar: Thank you so much, Tomiwa, for that. And I think you raise an important point regarding platforms being very transparent about the input that they take into consideration and why, and not just communicating the outcome. I’m not sure if Naomi is online. I can’t see from here. Naomi, if you’re online, would you like to jump in? Naomi Schiffman is from the Oversight Board, and if she’s online, she will discuss how the Oversight Board contributes to policy development and highlight how she built the Academic and Research Partnerships Program at Found Temple. Is she online? No. Okay. If she’s not online, I think we can move into the discussion phase. We have a few more minutes. But before I ask questions, we’ve been talking for the last few minutes; are there any questions for our experts? Yeah. Please introduce yourself and, yeah.

Mike Walton: Hi everyone, it’s Mike Walton from the UN Refugee Agency, and we’ve worked with a number of people on the call, so good to see you. It’s about capacity, and I love the ground-up approach, but I just wonder how much capacity there is, both on Meta’s side and at other content platforms, for really putting that resource where it’s needed. And the issue of language comes up again and again, in terms of capacity to support a wide breadth of languages that are unsupported now. So how can we take that bottom-up approach, not just for policy development, but also for content moderation, and make sure that we have a really strong infrastructure there? I know lots of people are putting AI at the heart of this; maybe that can help us moderate content going forward, and that might be one possibility, but the doubts are there. So yes, what can we do, and is there enough capacity, and if not, how can we increase that capacity?

Gandhi Emilar: Thank you so much, Mike. That’s a great question. Before we get back to it, any other question? Yes, please introduce yourself, and thank you.

AUDIENCE: Thank you so much. This is Adnan from Iraq. Thank you so much for this interesting discussion; I actually learned from all of you. Last year, I participated in one of Meta’s events on community standards in Amman. It was actually helpful. I have a similar question, actually, because I’m from Iraq, and I know that Iraq is a very diverse country. My question would be for Conor, regarding other languages: the policies you mentioned, maybe most of them are in English. I don’t know whether they are available in different languages, so people can read about them. And also, the next question will be about the engagement of stakeholders at the local level. In Iraq, for example, I feel like there is a lot of difficulty in reaching Meta when someone, a researcher or an NGO, wants to engage, to ask a question or raise an issue. It’s really difficult to get to the experts. Thank you so much.

Gandhi Emilar: Thank you so much. And so good to see someone who attended our community summit here. Conor, can you take on some parts of the question? I’m happy to jump in.

Conor Sanchez: Yeah, yeah, just really quick. Thanks for the questions. I think language is a huge, huge part of content moderation and our enforcement. It’s obviously something that we’ve invested in quite a bit over the last eight years or so. I think just overall, zooming out, we’ve invested $20 billion in safety and security. And so our trust and safety team at the company is made up of about 40,000 different people who bring language expertise, but also bring expertise in certain policy violation areas, in certain areas such as safety and cybersecurity. Content moderation includes thousands of reviewers that moderate our content across our platforms, so Facebook, Instagram, and Threads, in about 70 different languages. We also do fact-checking with our third-party fact-checkers for misinformation in about 60 different languages. And then for coordinated inauthentic behavior, which focuses on what many would consider foreign influence operations, this is looking at taking down networks of operations, and those have been done in about 42 different languages. So it’s something that we are continually wanting to get better at. And I think in addition to just the language differences, there are the cultural differences and the colloquial nuances that come with every language. Even with something like Spanish, you have certain terms and ways of speaking that differ from Central America to South America. And for that, another part of our content moderation apparatus that’s helpful is our Trusted Partner Program. Our Trusted Partner Program is a network of hundreds of NGOs around the world that we manage, and they really provide that local context, that local insight, when there is maybe a particular trend or a term that may only be used in that jurisdiction or in that region; they can then be informative for our policies as we’re developing something or taking action on particular pieces of content. But Emilar, anything else that I may have missed on that?

Gandhi Emilar: I think you have spoken about a lot of things there which are really, really relevant. Just to add, on some of the questions that you asked, Mike, around capacity on both sides: Conor has mentioned that we have over 40,000 people in trust and safety, but I think you can never be at a point where you say, we have full capacity, we know everything. Cultural competence is very important for us to understand. But I also wanted to mention one other thing: when we engage externally, it’s also important to note that we have some external stakeholders or experts or people with lived experience who are willing and able to engage with us, and some who are willing but unable to engage with us. And they’re unable because maybe connectivity, you know, internet connection, is expensive, or because of language capacity. We have some local team members, some people who can speak the languages, and we ensure that we either meet people where they are, where we can, or support connectivity, support them to engage as well. But we know that when it’s once-off, it doesn’t work; we need to sustain it and make sure that it’s something that we are continuously able to do. So we also look at the format of the engagement itself. On capacity, we need to continuously look at the context, know where we have gaps, and also rely on our external experts to say, you know, you could have done better in this. And we also learn a lot, not only from academics like Jeff or NGOs, but also from humanitarian organizations, because you are on the ground, you know what’s happening, and you deal with people every day. And talking of sustaining these engagements, I just want to come back to you, Jeff. How can we sustain engagements with academics? Because once-off really isn’t as meaningful as we want it to be. How can we ensure that it’s continuous?

Jeffrey Howard: Well, I think relationships are key to the story here, and making sure that there’s ongoing dialogue with the stakeholders over time. My experience participating in groups within Meta who have periodic meetings where they revisit policy areas over time is extremely useful. And of course, as those relationships develop, they are reciprocated. So I’ve been delighted to have lots of people from Meta and the Oversight Board participating in events at my university. And so I think investing in those relationships is absolutely crucial here. I do have a question for Conor if I can throw it in. Conor, can I take you back to your point about content soliciting smuggling? You talked about the fact that a lot of on-the-ground stakeholders with expertise on this issue counseled against banning that content. But in the end, you took the judgment that you should remove content soliciting smuggling, “How do I get out of Libya? How do I get to Italy?”, for example, while offering that information page of trusted third-party information. Can you talk us through how you made the decision not to defer to those on the ground who were saying, leave this content up? What was that experience like? Because it does seem to me like the right judgment, but of course, it went against what some people thought you should do. So I wondered how you made that decision.

Conor Sanchez: Yeah, that’s a great question. I mean, this was an area where I can’t say that it was neatly divided in terms of where people felt that we should go with this policy. I think everybody, first and foremost, that we spoke with recognized that this is a very, very difficult call. But the picture that they painted for us was that people who are on the move are receiving information from a wide variety of sources, and they’re making decisions on a thousand different factors. So, yes, they’re online, but they’re also in person, and they’re also in migrant shelters. They’re also speaking with relatives in their hometown before they maybe start on their journey. And they’re making these decisions on a wide variety of different information points that they receive. So I think the thing that they wanted us to really hone in on was to think about some universal human rights standards as we approach this, in terms of proportionality. We aren’t the first entity to think about these challenges. There have been consultative processes in the past that we could take advantage of. And I think this comes back to Tomiwa’s point, which is the way in which we can learn from international human rights legal frameworks. The protocol on human smuggling was something that we were urged to take a look at, and that document’s differentiation between human trafficking and human smuggling, making sure that we understood those two definitions. And then I think from our standpoint, we began to think, okay, we don’t need to make those distinctions on a binary decision of remove or keep up. We could still remove those and still allow for some understanding of those who may be posting this by providing information through a safety page. So once we had that idea of providing a safety page, that was something we could introduce that would reduce the risk of removing. And once we went to stakeholders with that as an option, many of them, even the ones who were originally saying, leave up, leave up, were at least very, very warm to the idea: at least you can provide this safety page that would serve to reduce the risk of just removing it.

Gandhi Emilar: Thank you so much, Conor. I think we only have a minute or so. Do you want to give closing remarks, just a minute?

Jeffrey Howard: Well, I think Conor, in his wonderfully detailed answer, which gave us a real sense of how the process works, illuminated a really crucial feature of it, which is that in these policy areas it’s often an iterative process, where you go back to stakeholders with updates and they might themselves change their minds, because people’s views on the topics under discussion are often not fixed; they are the result of ongoing deliberation. And so I think one of the things that we’re taking out of this panel is the importance of having ongoing conversations like these to improve our discussions about these topics. And I’m ever so grateful to everyone for coming and for being involved in this discussion.

Gandhi Emilar: Thank you so much. I’m not sure if Tomiwa is still there. Do you want to give your closing remarks as well? Just a minute.

Tomiwa Ilori: Yes. Thank you very much, Emilar. It’s a pleasure to have been here, and also listening to the others and the questions being asked. I think conversations like this will continue to happen, and we should continue to put in the work, because, like you also said, Emilar, I don’t think there will ever be a time when we come to the point of saying, okay, we’ve done everything that could be done regarding content moderation, because issues will always crop up that need diversified and multi-stakeholder contributions. So it’s a pleasure to be here, and thank you very much. Until some other time.

Gandhi Emilar: Yeah, thank you so much to everyone who’s in the room and everyone else who joined us online as well. I know Professor Howard is still around, so for those who still want to engage with him on site, please do. And Conor, Tomiwa, thank you so much for participating in this. Bye for now. Thanks, everybody.

Jeffrey Howard

Speech speed

180 words per minute

Speech length

1805 words

Speech time

599 seconds

Educative role rather than activist role

Explanation

Academics should adopt an educative approach when engaging with platforms, rather than an activist one. The goal should be to inform and empower platforms with frameworks and insights, not to push a particular viewpoint.

Evidence

Comparison to teaching methods in academic classes, where professors present various perspectives rather than just their own.

Major Discussion Point

Role of academics in platform policy development

Differed with

Tomiwa Ilori

Differed on

Role of academics in platform policy development

Providing frameworks and insights rather than pushing preferred views

Explanation

Academics should focus on supplying platforms with insights and frameworks to make informed decisions. The role is not to tell platforms what to do, but to equip them with the tools to make their own judgments.

Evidence

Example of presenting philosophical theories about threats and legal philosophy concepts to Meta for their policy on violent threats.

Major Discussion Point

Role of academics in platform policy development

Importance of judgment by platforms themselves

Explanation

Platforms must ultimately make their own judgments on ethical questions. While academics can provide insights, the final decision and responsibility lie with the platform.

Evidence

Analogy to personal moral decisions, where relying solely on others’ opinions would be intellectually and morally unserious.

Major Discussion Point

Role of academics in platform policy development

Engaging with platforms is intellectually generative for academics

Explanation

Academics should engage with platforms as it can be intellectually stimulating and help identify new research topics. It also provides an opportunity to make a practical difference through academic work.

Major Discussion Point

Role of academics in platform policy development

Agreed with

Conor Sanchez

Ghandi Emilar

Agreed on

Ongoing and sustained engagement

Conor Sanchez

Speech speed: 130 words per minute

Speech length: 2576 words

Speech time: 1181 seconds

Consulting wide range of experts on policy development

Explanation

Meta engages with a diverse group of experts when developing policies. This includes academics, NGOs, human rights experts, and individuals with lived experiences relevant to the policy area.

Evidence

Examples of consulting experts for crisis policy protocol and functional identification process.

Major Discussion Point

Meta’s stakeholder engagement process

Agreed with

Jeffrey Howard

Tomiwa Ilori

Agreed on

Importance of diverse stakeholder engagement

Inclusive, expertise-based, and transparent engagement principles

Explanation

Meta’s stakeholder engagement process is guided by principles of inclusivity, expertise, and transparency. They aim to include a wide range of perspectives and build trust with stakeholders.

Major Discussion Point

Meta’s stakeholder engagement process

Case studies of expert input impacting policies

Explanation

Meta provided examples of how expert consultations have directly influenced policy decisions. This demonstrates the practical impact of stakeholder engagement on platform governance.

Evidence

Examples of crisis policy protocol, functional identification process, and human smuggling content policy.

Major Discussion Point

Meta’s stakeholder engagement process

Agreed with

Jeffrey Howard

Ghandi Emilar

Agreed on

Ongoing and sustained engagement

Balancing different stakeholder perspectives in decision-making

Explanation

Meta considers various stakeholder perspectives when making policy decisions. They aim to balance different viewpoints and potential risks in their final policy choices.

Evidence

Example of decision-making process for policy on human smuggling content.

Major Discussion Point

Meta’s stakeholder engagement process

Language and cultural differences in moderation

Explanation

Content moderation faces challenges due to language and cultural differences. Platforms need to consider not just language translation but also cultural nuances and colloquial expressions.

Evidence

Meta’s content moderation in 70 languages, fact-checking in 60 languages, and addressing coordinated inauthentic behavior in 42 languages.

Major Discussion Point

Challenges in content moderation

Tomiwa Ilori

Speech speed: 139 words per minute

Speech length: 322 words

Speech time: 138 seconds

Meaningful collaboration with credible human rights institutions

Explanation

Platforms should collaborate with established and credible human rights institutions to identify human rights issues. This involves working directly with organizations that have local expertise and presence.

Major Discussion Point

Human rights-centered approach to content moderation

Agreed with

Jeffrey Howard

Conor Sanchez

Agreed on

Importance of diverse stakeholder engagement

Centering victims and lived experiences

Explanation

Platforms should focus on including the voices of victims and those with lived experiences in their policy development process. This ensures that the impact on vulnerable persons is considered.

Major Discussion Point

Human rights-centered approach to content moderation

Differed with

Jeffrey Howard

Differed on

Role of academics in platform policy development

Bottom-up content governance approach

Explanation

Platforms should adopt a bottom-up approach to content governance by working with key actors and experts in specific domestic contexts. This provides more contextual nuances and understanding of local issues.

Major Discussion Point

Human rights-centered approach to content moderation

Agreed with

Jeffrey Howard

Conor Sanchez

Agreed on

Importance of diverse stakeholder engagement

Increasing access to platform data for independent research

Explanation

Platforms should provide more access to their data for independent researchers, especially in underserved contexts. This allows for better understanding of how platforms shape critical aspects of human rights challenges.

Major Discussion Point

Human rights-centered approach to content moderation

Mike Walton

Speech speed: 189 words per minute

Speech length: 170 words

Speech time: 53 seconds

Capacity and resource constraints

Explanation

There are concerns about the capacity of platforms to implement bottom-up approaches and support a wide range of languages in content moderation. This raises questions about resource allocation and infrastructure.

Major Discussion Point

Challenges in content moderation

Unknown speaker

Speech speed: 0 words per minute

Speech length: 0 words

Speech time: 1 second

Difficulty accessing platforms for local stakeholders

Explanation

Local stakeholders, such as researchers or NGOs, often face challenges in reaching out to platforms like Meta to engage, ask questions, or raise issues. This difficulty in access can hinder effective local engagement.

Evidence

Example from Iraq where it’s difficult for local stakeholders to reach Meta experts.

Major Discussion Point

Challenges in content moderation

Ghandi Emilar

Speech speed: 154 words per minute

Speech length: 1347 words

Speech time: 521 seconds

Sustaining ongoing engagement with academics and experts

Explanation

There is a need to sustain continuous engagement with academics and experts, rather than relying on one-off interactions. This ongoing dialogue is crucial for meaningful policy development and improvement.

Major Discussion Point

Challenges in content moderation

Agreed with

Jeffrey Howard

Conor Sanchez

Agreed on

Ongoing and sustained engagement

Agreements

Agreement Points

Importance of diverse stakeholder engagement

Jeffrey Howard

Conor Sanchez

Tomiwa Ilori

Consulting wide range of experts on policy development

Meaningful collaboration with credible human rights institutions

Bottom-up content governance approach

All speakers emphasized the importance of engaging with a diverse range of stakeholders, including academics, NGOs, human rights experts, and individuals with lived experiences, to inform platform policy development.

Ongoing and sustained engagement

Jeffrey Howard

Conor Sanchez

Ghandi Emilar

Engaging with platforms is intellectually generative for academics

Case studies of expert input impacting policies

Sustaining ongoing engagement with academics and experts

Speakers agreed on the need for continuous, sustained engagement between platforms and external experts to ensure meaningful policy development and improvement.

Similar Viewpoints

Both speakers emphasized the importance of considering multiple perspectives and frameworks in policy development, rather than pushing for a single preferred view.

Jeffrey Howard

Conor Sanchez

Providing frameworks and insights rather than pushing preferred views

Balancing different stakeholder perspectives in decision-making

Both speakers highlighted the importance of inclusivity and considering the perspectives of those directly affected by platform policies.

Conor Sanchez

Tomiwa Ilori

Inclusive, expertise-based, and transparent engagement principles

Centering victims and lived experiences

Unexpected Consensus

Challenges in content moderation across languages and cultures

Conor Sanchez

Mike Walton

Unknown speaker

Language and cultural differences in moderation

Capacity and resource constraints

Difficulty accessing platforms for local stakeholders

There was an unexpected consensus on the significant challenges faced in content moderation across different languages and cultures, including resource constraints and difficulties in local engagement. This highlights a shared recognition of the complexity of global content moderation.

Overall Assessment

Summary

The main areas of agreement centered around the importance of diverse stakeholder engagement, the need for sustained and ongoing collaboration between platforms and experts, and the recognition of challenges in global content moderation.

Consensus level

There was a moderate to high level of consensus among the speakers on the fundamental principles of stakeholder engagement and policy development. This consensus suggests a shared understanding of the complexities involved in platform governance and the importance of collaborative approaches. However, the discussion also revealed ongoing challenges, particularly in implementing these principles across diverse global contexts, which may require further exploration and innovative solutions.

Differences

Different Viewpoints

Role of academics in platform policy development

Jeffrey Howard

Tomiwa Ilori

Educative role rather than activist role

Centering victims and lived experiences

Jeffrey Howard argues for an educative approach where academics provide frameworks and insights, while Tomiwa Ilori emphasizes the importance of including victims’ voices and lived experiences in policy development.

Unexpected Differences

Transparency in stakeholder engagement

Conor Sanchez

Tomiwa Ilori

Inclusive, expertise-based, and transparent engagement principles

Increasing access to platform data for independent research

While both speakers discuss transparency, there’s an unexpected difference in their approach. Conor emphasizes Meta’s existing transparency in engagement, while Tomiwa calls for increased access to platform data for independent researchers, suggesting a gap in current transparency practices.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of academics in policy development, the extent of stakeholder engagement, and the level of transparency in platform operations.

Difference level

The level of disagreement is moderate. While there are some fundamental differences in approach, particularly between academic and platform perspectives, there is also significant common ground in recognizing the importance of expert input and stakeholder engagement. These differences highlight the complexity of developing content moderation policies that balance various stakeholder interests and human rights principles.

Partial Agreements

Both speakers agree on the importance of engaging diverse stakeholders, but Conor focuses on Meta’s existing process of consulting various experts, while Tomiwa advocates for a more bottom-up approach that emphasizes local context and actors.

Conor Sanchez

Tomiwa Ilori

Consulting wide range of experts on policy development

Bottom-up content governance approach


Takeaways

Key Takeaways

Academics should play an educative rather than activist role in platform policy development, providing frameworks and insights rather than pushing preferred views

Meta engages a wide range of stakeholders and experts in its policy development process, aiming for inclusivity, expertise, and transparency

A human rights-centered approach to content moderation should involve meaningful collaboration with credible institutions, centering victims’ experiences, and adopting bottom-up governance

Content moderation faces significant challenges related to language, cultural differences, capacity constraints, and sustaining ongoing engagement with experts

Resolutions and Action Items

Meta to continue engaging diverse stakeholders and experts in policy development

Platforms to increase access to data for independent researchers, especially in underserved contexts

Meta to expand language capabilities for content moderation and fact-checking

Unresolved Issues

How to effectively scale bottom-up content governance approaches

How to balance conflicting stakeholder perspectives in policy decisions

How to improve accessibility for local stakeholders to engage with platforms

How to sustain long-term engagement with academics and experts

Suggested Compromises

Removing content soliciting human smuggling services while providing a safety page with immigration information

Balancing removal of potentially harmful content with providing alternative resources or information

Thought Provoking Comments

I think we should reject the activist conception of the role of academics and stakeholder engagement, and I think we should reject it because it diminishes the distinctive role that I think academics can play in this process because it eliminates the distinction between the role academics can play and the role that other stakeholders can play.

Speaker

Jeffrey Howard

Reason

This comment challenges the common view of how academics should engage with platforms and proposes a different model focused on education rather than advocacy.

Impact

It shifted the discussion to focus on the unique role academics can play in providing frameworks and insights rather than just pushing for specific policies. This led to further exploration of how platforms can best utilize academic expertise.

Our role isn’t just to tell platforms what we think the right answer is as we see it, as if platforms were counting votes among stakeholders. And by the way, even if platforms were counting votes among stakeholders, it’s not clear academics should get a vote since we’re not really stakeholders, we’re not particularly affected by policies in the way particular constituents are.

Speaker

Jeffrey Howard

Reason

This insight reframes the role of academics from advocates to educators, highlighting that their value comes from knowledge rather than representing a constituency.

Impact

It prompted reflection on how platforms should weigh different types of input and expertise in their decision-making processes. It also set up the later discussion of how Meta actually incorporates academic and expert input.

What I mean by that is working with key actors and experts in specific domestic contexts, such as national human rights institutions, civil society, and academics. This provides more contextual nuances and understanding of the issues on the ground, how these actors are currently thinking about them, and how exactly platforms can learn from their impact on the ground.

Speaker

Tomiwa Ilori

Reason

This comment emphasizes the importance of local context and on-the-ground expertise, which adds nuance to the earlier discussion of academic input.

Impact

It broadened the conversation beyond just academic input to consider a wider range of stakeholders and expertise. This led to further discussion of how Meta engages with diverse stakeholders globally.

Can you talk us through how you made the decision not to defer to those on the ground who were saying, leave this content up? What was that experience like? Because it does seem to me like the right judgment, but of course, it went against what some people thought you should do.

Speaker

Jeffrey Howard

Reason

This question probes into the actual decision-making process at Meta, moving the discussion from theoretical to practical considerations.

Impact

It prompted a detailed explanation from Connor about how Meta balances different expert opinions and stakeholder input in making policy decisions. This provided concrete insight into Meta’s policy development process.

Overall Assessment

These key comments shaped the discussion by moving it from theoretical considerations of academic engagement to practical exploration of how platforms like Meta actually incorporate diverse expert and stakeholder input. The conversation evolved from defining ideal roles for academics to examining the complexities of balancing different perspectives and local contexts in global policy decisions. This progression provided a more nuanced and realistic picture of the challenges and processes involved in platform governance and content moderation policy development.

Follow-up Questions

How can platforms increase capacity to support content moderation in a wide breadth of languages?

Speaker

Mike Walton

Explanation

This is important to ensure effective content moderation across diverse linguistic contexts and to implement a bottom-up approach to policy development and enforcement.

How can AI be effectively used to help moderate content across languages?

Speaker

Mike Walton

Explanation

This explores potential technological solutions to the language capacity issue in content moderation, while acknowledging existing doubts about AI’s effectiveness.

How can Meta improve engagement with stakeholders at the local level, particularly in countries like Iraq?

Speaker

Adnan

Explanation

This addresses the difficulty some local researchers and NGOs face in reaching Meta experts to engage on issues or ask questions, which is crucial for effective local stakeholder engagement.

How can platforms sustain meaningful engagements with academics over time?

Speaker

Ghandi Emilar

Explanation

This is important to ensure that academic input into platform policies is continuous and not just a one-off engagement, leading to more robust and informed policy development.

How can platforms better identify and work directly with institutions that have ‘boots on the ground’ regarding content moderation in specific contexts?

Speaker

Tomiwa Ilori

Explanation

This is crucial for ensuring that content moderation policies are informed by local expertise and context-specific knowledge.

How can platforms increase access to platform data for independent research, especially in underserved contexts such as the majority world?

Speaker

Tomiwa Ilori

Explanation

This is important for enabling more comprehensive research on how platforms shape critical aspects of human rights challenges in diverse global contexts.

What should be the best standard practice regarding imported content moderation labels?

Speaker

Tomiwa Ilori

Explanation

This area requires further research to establish effective practices for content moderation across different cultural and linguistic contexts.

In what ways can platforms fund or support objective institutional research without impeding their independence or credibility?

Speaker

Tomiwa Ilori

Explanation

This is important for ensuring that research on platform policies and impacts remains independent and credible while still benefiting from platform support and data access.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.