Open Forum #76 Digital for Development: UN in Action

17 Dec 2024 10:15h - 11:15h

Open Forum #76 Digital for Development: UN in Action

Session at a Glance

Summary

This panel discussion at the IGF focused on the challenges of disinformation in the digital age and strategies to combat it. Experts from various sectors, including law enforcement, academia, tech companies, and civil society, shared insights on the impacts and mitigation of disinformation.


The discussion highlighted the distinction between disinformation (intentionally false information) and misinformation (unintentionally spread false information). Panelists emphasized that disinformation erodes trust in institutions and can have significant economic impacts, with billions of dollars in market value potentially wiped out in global markets by false information.


Technological solutions were discussed, including AI-powered detection systems and fact-checking partnerships employed by platforms like Meta. However, panelists stressed that technology alone is insufficient, and a multi-stakeholder approach involving government, industry, civil society, and users is necessary.


The importance of digital literacy and critical thinking skills was emphasized as crucial for users to discern credible information. Panelists also noted the need for harmonized legal frameworks and international cooperation to address the cross-border nature of disinformation.


The discussion touched on the cyclical nature of disinformation, with increased activity during crises or elections. Panelists agreed that while efforts may need to be scaled up during these periods, a consistent, long-term approach to combating disinformation is essential.


The panel concluded by emphasizing the role of good digital citizenship and the need for all stakeholders to actively participate in creating a safer, more trustworthy online environment. The discussion underscored the complexity of the disinformation challenge and the importance of collaborative, multifaceted solutions.


Keypoints

Major discussion points:


– The definition and impacts of disinformation vs. misinformation


– How disinformation affects trust in institutions and the economy


– Approaches and tools used by platforms to combat mis/disinformation


– The role of civil society and digital literacy in addressing the issue


– Governance efforts and regulations to tackle disinformation


Overall purpose:


The goal of this discussion was to explore the complex challenges posed by disinformation in the digital age, examining its impacts across society and discussing multi-stakeholder approaches to address the issue.


Tone:


The tone was primarily informative and collaborative, with panelists offering insights from their respective areas of expertise. There was a sense of shared concern about the impacts of disinformation, balanced with cautious optimism about potential solutions. The tone became slightly more urgent when discussing the need for coordinated action, but remained constructive throughout.


Speakers

– Judith Espinoza: Governance specialist with the World Economic Forum focusing on frontier technologies


– Madan Oberoi: From Interpol


– Dr. Hoda A. Alkhzaimi: Co-founder and director of the Center for Cybersecurity at NYU Abu Dhabi


– Agustina Callegari: Leading safety at the World Economic Forum


– David Sullivan: Executive Director of the Digital Trust and Safety Partnership


– Monica Gizzi: Head of public policy in Brazil for Integrity at Meta


– Saeed Al Dhaheri: Director of the Centre for Future Studies at the University of Dubai, President of the UAE Robotics and Automation Society, Member of eSAFE (Emirates Safer Internet Society)


Additional speakers:


– Stephanie: Director of public policy in Brazil for data integrity at MEDA (mentioned but did not speak)


– Balthazar: From Cyber Internet Lab in Indonesia (audience member who asked a question)


– Andrew Campling: Trustee for the Internet Watch Foundation (audience member who asked a question)


Full session report

Summary of the IGF Panel Discussion on Disinformation


This panel discussion at the Internet Governance Forum (IGF) brought together experts from various sectors to explore the complex challenges posed by disinformation in the digital age. The conversation examined the impacts of disinformation across society and discussed multi-stakeholder approaches to address the issue.


Introduction and Panelist Backgrounds


The panel was moderated by Judith Espinoza and included:


– Madan Oberoi, Executive Director of Technology and Innovation at Interpol


– Dr. Hoda A. Alkhzaimi, Director of the Center for Cyber Security at New York University Abu Dhabi


– David Sullivan, Executive Director of the Digital Trust and Safety Partnership


– Monica Gizzi, Public Policy Manager for Content Regulation at Meta


– Saeed Al Dhaheri, Director of the Center for Future Studies at the University of Dubai


– Agustina Callegari, who leads digital safety work at the World Economic Forum (joining remotely as co-moderator)


Definition and Impacts of Disinformation


Madan Oberoi provided a clear definition of disinformation as “false and misleading and manipulated synthetic information, which is created for the purpose of cheating, for the purpose of harming, for the purpose of wrongfully influencing the opinion.” This definition helped distinguish between disinformation (intentionally false information) and misinformation (unintentionally spread false information).


The panelists highlighted several significant impacts of disinformation:


1. Trust Erosion: Both Madan Oberoi and David Sullivan emphasized how disinformation erodes trust in institutions and between individuals. Sullivan noted that trust is what enables action in the face of risk, and cited an ISO working group’s definition of trustworthiness as “the ability to meet stakeholders’ expectations in a verifiable way.”


2. Economic Impacts: Dr. Hoda A. Alkhzaimi provided a striking example of the economic consequences of disinformation, citing a 2013 incident where a single tweet led to a 140-point drop in the stock market, resulting in a $136 billion loss in a single day.


3. Political and Social Disruption: David Sullivan discussed how disinformation undermines political and social institutions.


4. Emotional Manipulation: Monica Gizzi highlighted the emotional nature of disinformation and its ability to manipulate users.


5. Harm to Vulnerable Groups: Andrew Campling, an audience member from the Internet Watch Foundation, raised concerns about the harm disinformation can cause to vulnerable groups, particularly children.


Approaches to Combating Disinformation


The panel discussed various approaches to address the challenge of disinformation:


1. Multi-jurisdictional Cooperation: Madan Oberoi emphasized the need for legal framework harmonization across borders to address the cross-border nature of online information.


2. Technological Solutions:


– Monica Gizzi discussed Meta’s use of algorithmic detection and partnerships with over 100 fact-checking agencies globally. She explained how Meta uses “friction” to slow the spread of misinformation by adding warning labels and reducing distribution of potentially false content (a simplified sketch of this kind of workflow follows the list below).


– Madan Oberoi mentioned the potential of AI and blockchain for verification.


– Dr. Hoda A. Alkhzaimi stressed the need for transparent trust indicators across platforms.


3. Digital Literacy Initiatives: Saeed Al Dhaheri shared the UAE’s multifaceted approach, including the launch of a digital wellbeing online platform in 2018 to promote positive and safe digital usage for children, parents, and society at large. He also mentioned the UAE’s efforts in developing AI-powered tools to detect fake news and disinformation.


4. Civil Society Involvement: David Sullivan highlighted the role of civil society in research and advocacy, emphasizing the importance of increasing transparency in trust and safety operations without dictating content policies to companies. He also discussed the Digital Trust and Safety Partnership’s work in developing best practices for digital platforms.


5. Platform Accountability: Andrew Campling raised the question of whether civil society groups should demand more from governments to mandate better standards for protecting citizens from online harm.


6. Global Coalition for Digital Safety: Agustina Callegari mentioned the work of this coalition, which has developed a typology of online harms to help address various forms of harmful content, including disinformation.
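
To make the fact-checking and “friction” workflow described above more concrete, the following is a minimal sketch in Python of how such a pipeline could be wired together: weak signals (user reports, disbelief comments) feed a suspicion score, suspicious posts are queued for third-party fact-checkers, and a “false” verdict triggers reduced distribution plus click-through and reshare warnings. The thresholds, signal weights, data model, and function names are illustrative assumptions only, not Meta’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    UNREVIEWED = "unreviewed"
    FALSE = "false"
    ACCURATE = "accurate"


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0        # "this is fake news" reports from users
    disbelief_comments: int = 0  # comments signalling distrust ("I don't believe this")
    verdict: Verdict = Verdict.UNREVIEWED
    distribution_weight: float = 1.0   # 1.0 = normal reach in feeds
    warning_screen: bool = False       # click-through filter before viewing
    share_confirmation: bool = False   # "are you sure?" pop-up on reshare


REVIEW_THRESHOLD = 0.6  # assumed cut-off for sending a post to fact-checkers


def suspicion_score(post: Post) -> float:
    """Combine weak signals (reports, disbelief comments) into a 0-1 score."""
    score = 0.3 * post.user_reports + 0.1 * post.disbelief_comments
    return min(score, 1.0)


def maybe_enqueue_for_fact_check(post: Post, queue: list) -> None:
    """Queue a post for third-party fact-checking if it looks suspicious enough."""
    if post.verdict is Verdict.UNREVIEWED and suspicion_score(post) >= REVIEW_THRESHOLD:
        queue.append(post)


def apply_fact_check(post: Post, verdict: Verdict) -> None:
    """Apply the fact-checkers' verdict: label, demote, and add friction if false."""
    post.verdict = verdict
    if verdict is Verdict.FALSE:
        post.distribution_weight = 0.2   # significantly reduced reach, not deleted
        post.warning_screen = True       # user must click through a warning to view
        post.share_confirmation = True   # user must confirm before resharing


if __name__ == "__main__":
    fact_check_queue: list = []
    post = Post("p1", "Sensational claim...", user_reports=2, disbelief_comments=4)
    maybe_enqueue_for_fact_check(post, fact_check_queue)
    for flagged in fact_check_queue:
        apply_fact_check(flagged, Verdict.FALSE)
    print(post)
```

In a real deployment the signals and thresholds would be learned models and policy decisions rather than hard-coded constants; the point of the sketch is only the shape of the review, label, demote and add-friction flow described in the session.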


Challenges in Addressing Disinformation


The discussion also touched on several challenges in combating disinformation:


1. Distinguishing Misinformation from Disinformation: Monica Gizzi and other panelists highlighted the difficulty in differentiating between intentional and unintentional spread of false information.


2. Cross-border Nature of Online Information: Madan Oberoi pointed out the challenges posed by the global reach of digital platforms.


3. Balancing Free Speech with Harm Prevention: David Sullivan and Monica Gizzi both acknowledged the complexity of addressing disinformation while preserving free speech.


4. Increased Prevalence During Crises: An audience member, Balthazar, raised the question of whether efforts should be differentiated between periods of crisis and normal times. Panelists agreed that while heightened vigilance is necessary during crises, ongoing efforts are crucial.


5. Monetization of Disinformation: Andrew Campling highlighted concerns about the financial incentives for spreading disinformation on platforms.


Audience Questions and Closing Remarks


The audience engagement section included questions about platform accountability, differentiated approaches during crises, and the monetization of disinformation. Panelists emphasized the need for ongoing efforts and multi-stakeholder collaboration to address these complex issues.


In her closing remarks, Judith Espinoza emphasized the importance of good digital citizenship and the need for all stakeholders to actively participate in creating a safer, more trustworthy online environment. She stressed that combating disinformation is a collective responsibility that requires continuous effort and adaptation to evolving challenges.


Session Transcript

Judith Espinoza: Hello, welcome. Thank you so much for joining us today on the second day of the IGF. My name is Judith Espinoza. I’m a governance specialist with the World Economic Forum focusing on frontier technologies. And this session is hosted in coordination with the Global Coalition for Digital Safety, as well as the Defining and Building the Metaverse Initiative couched at the World Economic Forum. It is my pleasure to welcome you all, both online and virtual. We have an incredible cast with us here today, and I’m going to go ahead and introduce them. Right to my left I have Dr. Madan Oberoi, who is joining us from Interpol. To his left, I have Dr. Hodal Kazemi, co-founder and director of the Center for Cybersecurity at NYU Abu Dhabi. And to her left, I have Stephanie, who is joining us from MEDA, who is the director of public policy in Brazil for data integrity. And I’ll go ahead and I’ll turn it over to my colleague, Agustina Caligari, who is joining us remotely all the way from Argentina today, who unfortunately could not be here in person, who’s going to go ahead and introduce our panel, who is joining us virtually. Agustina, go ahead.


Agustina Callegari: Can you hear me well? Hi, everyone. I hope you can hear me well.


Judith Espinoza: Yeah, go ahead. I don’t know if you are able to hear. Ah,


Agustina Callegari: yeah, you are able to hear. Hi, everyone. It’s a pleasure to be there at this online. My name is Agustina Caligari. I’m leading the safety at the World Economic Forum. And well, we put together this session to discuss the challenges related to this information. And online, we have David Sullivan. David, I saw you there. He’s the executive director of the Trust and Safety Partnership. And we also have Sadid Al-Herin, if I’m not mistaken, and that is joining us online. Once we get there, you will get the chance to introduce yourself. Thank you, Judith. Back to you on site.


Judith Espinoza: Wonderful. Thank you so much. So today, we’re covering the very important topic of disinformation. As you all well know, disinformation has been around for a very, very long time. It did not come with the start of the internet, but it’s certainly evolving with it and changing along with the times. So today we want to focus on how this is being exacerbated by frontier technologies, how it can be mitigated by those same causes and effects. And maybe we can start with a question for Dr. Oberoi. I want to ask you, you know, you see a lot of these issues at Interpol, especially from a multi-jurisdictional perspective. I want to ask, what are the real dangers with disinformation? How do you tackle this, especially when information travels through physical boundaries, right? And the internet is not something that can be limited to sort of physical boundaries or traditional geopolitical lines that you would see like a harm that would be in person. How do you tackle this? Who is being affected and who are the perpetrators of disinformation?


Madan Oberoi: Thank you. Thank you, Judith. Disinformation by definition is a false and misleading and manipulated synthetic information, which is created for the purpose of cheating, for the purpose of harming, for the purpose of wrongfully influencing the opinion. So the very intent is wrong. Now, as you rightly mentioned, the new and emerging technologies are changing the entire landscape in terms of the speed at which and realistic synthetic media, which they are able to create, also the speed at which they can disseminate and the reach is multiplying manifold. So in light of this, one of the biggest casualties and one of the biggest dangers is, I would say, trust. That is, I think, the biggest factor which impacts everyone. I think David would be talking more about it in the… when his turn comes. But for us, trust in institutions, trust between individual, trust between citizens, public, everything gets impacted. And for a multilateral organization like Interpol, where our core basis is in international police collaboration, trust does make a big difference. So we are impacted in a big way in that sense. In other sense, I would say the other impact would be on security. Security and democracy also, in that sense, the way media can be used for delegitimizing some of the institutions, for influencing wrongfully some elections. So all those things are getting impacted. And the most important one would be the harm which is created through hate crimes, which is an outcome of more polarized society and more polarized individuals. And that is really who is behind that, who are the people who do it. I would say anybody who wants to gain or wants to manipulate and is looking for maybe power, maybe influence, maybe is going can be behind this. But in terms of categories, I would say we have seen state actors. We have seen non-state actors, including terror organizations. We have also seen criminals doing it for maybe manipulation of markets, maybe financial crimes, hate crimes. Those are also important. And I would also say that, to an extent, content amplifiers are also in this space. So that ought to be taken care of. What needs to be done, and you rightfully mentioned about the multi-jurisdictional frame of reference, which needs to be taken care of. And this is true for all kind of digital crimes which happen. And since they are multi-jurisdictional, the problem. It comes in terms that criminals do not have any jurisdictional restrictions. But when it comes to law enforcement, we are bound by that. And that brings in a lot of bureaucratic procedures and procedures which need to be followed to look at that. And therefore, difference in legal frameworks also is a factor there. So what we need is, firstly, some sort of legal framework harmonization. Second is a platform for interaction. And the various multilateral bodies like Interpol can provide that platform for interacting between different stakeholders and having a more harmonized policy to do that. And in other terms, one of the basic reasons would be that why we are not able to do that much against these disinformation campaigns is in terms of attribution and bringing consequences on people who are responsible for, which is one of the biggest problem in terms of attribution, who was the one who initiated this, and then taking action against that. So that is an important aspect. And then there’s the issue of platforms also. 
I think we’ll be seeing Monica responding to that part in terms of the liabilities of platforms and what needs to be done by platforms to make sure that they are not held responsible for that. That is an important factor. There are different reasons and how technology can help us in that, whether it is AI, whether it’s any other form in terms of maybe using blockchains for verifying facts, et cetera. Those are different technologies that are being talked about. So those should be and can be used for these purposes. I would also say in terms of community engagement, in terms of digital literacy, educating the users about what they need to. be careful about would also be an important factor for handling this kind of. So I’ll pause here and later see if there are any questions which I would like to answer.


Judith Espinoza: Thank you so much. No, I think that was a good cover. I want to touch on something that you said, which is there is an intent, right? There has to be intent to harm. And I think that this is a place where we can make the important distinction between disinformation and misinformation. So citizens can regularly be misinformed, right? I can be misinformed. I can read something online. I can repost or reshare a story that is factual. And my intent isn’t to harm, right? So there is also, I think, the evolution of these technologies. We see that there’s an interesting merge of the two, right? Who is disinforming people versus who is misinforming people? And those two, I think, go hand in hand. But I think that’s a perfect jumping off point. And you also talked about trust and how that affects institutions. And I think that’s a perfect point to bring Dr. Hoda out to see me here. So Dr. Hoda, I want to ask you, you do a lot of work researching and advising both industry across multiple platforms for this. And I want to ask you, what do you think the implications on the economy of misinformation in global markets, financial, and digital trade systems?


Dr. Hoda A. Alkhzaimi: I think the implications are quite high. Because when we are looking at, you have just said, Judith, to the intent versus the lack of intent within the ecosystems, we have a different scale of harm that is being propagated across different financial markets and affecting supply chain and, thus, trade in general. When we’re talking about creating platforms that lacks the ability to check real-time reliability of information and accuracy and integrity of the facts that are being generated, and allowing those platforms to, I would say, mass distribute eco-champers of false narratives without actually realizing that those false narratives are being communicated, it takes a toll on different valuation structures of business markets. Because we know, in current day and age, we have a lot of market volatility that’s being tied into intangible assets into the business. And those intangible assets could be the brand of the business. It could be, as well, the stock market positioning at a certain point of time, where a single tweet, as we have seen in the past, can lead to either abrasing the business or devaluating the market value of the business. In 2013, I think there was a tweet about a harm that’s being initiated in the White House that led to 140 points drop in the stock market. This 140 points led to $136 billion loss in a single day within minutes. And when you are trying to interpolate that to a greater granular framework, you could see that, on an annual basis, stock markets are losing something around 39 billion USDs based on a study that’s being issued by University of Baltimore and Oxford. And when you go further into trying to analyzing the cross-sectorial impact on maybe merchandise and brands that exist online and efficacy of a trust, because at the end of the day, most of this information leads to disrupting trust models and its users and its different stakeholders that are interacting on e-commerce platforms, for example. You might sway away from using specific e-commerce platform, and this is going to disrupt the global trade in general and affect maybe. at a later stage, the supply chain like what we’ve seen in COVID when we have heard that there would be a lack of medicine or a lack of food and people tended to go to panic shopping and panic stocking of supplies and this led to shortages at certain stages and I think we will be able to see this happening more often across the platforms and it would lead to disruption that is harming the global trade and billions and the supply chain and billions. Our question in here is how can we use this data to safeguard stock markets because we can’t change the current valuation model as fast as we want. We do have some kind of digital assets like cryptocurrencies for example which is highly volatile and highly targeted to this kind of scale of misinformation. So in order to do this we have the pathway to make sure that the trust indicators are being very communicated in a transparent way across the platform so people will know if they should trust a piece of information that comes into the platform and to do this we need to incorporate many kind of different layers of algorithmic stacking. I think we are still trying with federated learning, with zero knowledge proofs, with implementing trust models that are not in the map at the moment. 
Maybe for the lack of reduced computational complexity across the different platforms but we’re still trying to improve it and that is going to add on the longer run to building maybe a better stack of trust models across the map and I agree that we are having differently a global gap that comes to governance harmonization of what should be developed on different digital platforms or digital kind of units that are leading to creating and generating and communicating this information across the map and we need in today’s day and age to find a way where it’s part of the design structure or the design ethos of any platform or social platform or any type of communication platform across the map to provide for certain authenticity or integrity measures to be communicated to the public.


Judith Espinoza: Millions of dollars being disrupted in the global economy because of disinformation. I think that This is a good transition here to Monica, and I have to apologize. I used your middle name when I introduced you. But platforms are a way to spread disinformation, right? Because they’re sort of a, it’s spread vis-a-vis something. It’s not just sort of ephemeral. But there’s so many sources of disinformation that this must be a challenge, right? For platforms to be able to tackle or to begin to uncover where these sources are. So I wanna ask you, what are some of the mitigating tools that platforms are using to combat disinformation or misinformation, a combination of both, perhaps? And how do you approach this? I mean, you have millions of users worldwide, right? How do you detect these threats? And what are some of the ways that you really tackle these challenges, right? That are really social and risk?


Monica Gizzi: Thank you, Judith. Hi, everyone. I’m Monica Gizzi. I’m head of public policy in Brazil for Integrity. And one of the areas in which I’ve been focusing a lot of my attention and efforts over the past eight years at Meta is exactly misinformation and disinformation. I love it that you make the difference between these two terms. It’s very important that we understand that wherever there is disinformation, there is intent. You have bad actors behind the scenes that wish to harm people somewhere or wish to gain some type of profit with that. And then there’s misinformation, right? And that’s where people have access to the disinformation that was created by the first actors, but they don’t intend to harm. They believe what they’re seeing, right? And they’re spreading that. In a big tech company like Meta, we of course have to tackle both of these sources of this type of problem. On the edge of disinformation, we have invested billions of dollars over the past 10 years to really work against these actors. One way that bad actors spread disinformation is through fake accounts. We have an exceptional team of experts and we have a team of experts who have been working on this for a long time. And we have a team of experts who have been working on this for a long time. And we have a team of experts and they work hand-in-hand with the very best technology. So I’m talking about artificial intelligence, trying to identify fake accounts and take them down before they can even exist, even for a few seconds in our platforms. We have worked in the past eight years to tackle coordinated inauthentic behavior actions in our platforms, thus disrupting coordinated attempts to create disinformation and spread misinformation. Actors are, bad actors are very good at doing this. So we are constantly having to evolve and constantly having to invest in order to be able to do that. We have transparency reports, which we publish regularly and our transparency reports, they now include the numbers of accounts of fake accounts that we take down. And it’s amazing to see that we’re taking millions of accounts down every single day, fake accounts. And that’s the work that no one sees, right? Because it’s not causing harm as we’re tackling that before the harm can even happen. So that’s a behind the scenes work that I’m so happy to be able to talk about because no one really gets to know about them. But of course, something always escapes. And we also have that part of our users that are misinformed and that are spreading misinformation because they believe in it. One of the characteristics of misinformation is that we’re usually working with catchy openings, catchy phrases. And I truly believe on a personal level that misinformation speaks to people’s hearts and emotions. It’s not something very rational. People see a headline and that speaks to something somewhere inside them that they believe and they’re happy to see it, right? And they are eager to spread that to their friends, to their families. So tackling the misinformation part of the problem through users that are somehow getting this information and then spreading that is also another important part of our work. So we have several actions tackling that front, but I think one of the most important ones is the work that we do around the globe with fact-checking agencies. We work with over 100 fact-checking agencies across the globe. They’re covering pretty much every single language of the jurisdictions in which we offer our services. 
In Brazil alone, huge country, we work with six different fact-checking agencies. And the way it works is our artificial intelligence, because of course we’re talking about a huge volume, right? So it would be humanly impossible to tackle misinformation online if we didn’t have the technology on our side. So our algorithms, they’re constantly in our platforms looking for signs that something could potentially be misinformation. So for instance, if I post something on my Instagram or my Facebook, and a lot of people comment behind or underneath that post, oh my God, I don’t believe that, right? I don’t, oh my God, I don’t believe that can be, I really believe that or I don’t believe it. I think it’s fake, right? That’s a sign to our algorithms that something might be wrong. If Judith, you go to my post and you report it as fake news, that’s one possibility. That’s a very strong sign to our algorithms. And they’re looking at millions of signs, right? So whenever they reach a certain degree of certainty that that material could be potentially misinformation, they’re going to send that material onto a queue that is accessed by our third-party fact-checkers partners. and then they choose the materials and the links, the news, whatever, the memes. It’s not only a post, but it’s a link to a news publication. It can be a photograph. And they check that if that information is false or if that information is accurate. And then they label it. And they send that signal back to META. And then if something is reported as false by our partners, that information is significantly reduced in the feed of all of our social media applications. And I’m going to see a filter in front of that information. I’m no longer going to be able to see that information straightforward, but there will be a filter. It’s very similar to the graphic content filter. And it says you have to click. And it says this piece of content has been rated as false. Are you sure you want to access it? In the technology jargon, we call it adding friction to a certain content. And that means that the user has to take additional steps to access that information. Most people stop when they see that filter. Because most people don’t actually realize that some misinformation is misinformation. But even if the user chooses to access that information, sometimes the user wants to know. And I think it’s fair to say that people need information on what is going on in their networks. And that is false. And sometimes they wish to warn people. I might, for example, have seen other people talking about that specific piece of content. And I want to warn them that that’s misinformation. I can still access that. And then if I want to share it, a pop-up will come up. And it will say this piece of content has been rated as false. Are you really sure you want to share it? Right? So, we’re not deleting that content because we feel that it’s important for people to know that that content is false. We’re also giving the user the knowledge, the power of the knowledge and the tools to let other people also know that that information is not accurate, not correct. So, I know we have very limited time, Judith. So, I’m not going to go any further. But this is one of the very important steps and actions that we take towards combating disinformation and misinformation.


Judith Espinoza: Thank you so much. I think that was very comprehensive. And I think I really appreciate what you said, right? Which is disinformation is targeted and it does so in an emotional way, right? You really want to pry on people’s vulnerabilities and it’s humanly impossible to be able to look at every single case of mis or disinformation together, which is good, right? We want to make sure that our technologies are working, that our technologies are not just being weaponized to harm people or the intent to harm, but that we’re harnessing them to sort of combat these things as well. And this is a good place to pass it over to my colleague, Agustina Caliguieri, who is joining us virtually. I’ll go ahead and pass the floor to you, Agustina.


Agustina Callegari: Thank you, Judith. Well, yeah, I think that continuing with what Madan was saying about the importance of trust, I want to ask a question to David. That is, how does disinformation affect trust in institutions and how can civil society support efforts to combat disinformation? So David, over to you.


David Sulivan: Thank you, Agustina. Thank you, Judith, and to everybody in the room today. I’m David Sullivan, Executive Director of the Digital Trust and Safety Partnership. This concept of trust and safety is the function within a lot of platforms and technology companies that’s focused on ensuring that users have a safe experience and they feel like they have the trust needed to use a service. I was thinking about this question of trust and the definition of trust in advance of this session. And I found an article from nearly more than 22 years ago from a Purdue University professor named Josh Boyd, an article in the Journal of Computer-Mediated Communication about In Community We Trust, about online security at eBay. And eBay was known as one of the first companies to have a trust and safety team and has been very influential in this field. And I thought this article was very interesting. because it talked about trust and that for trust to exist, there also has to be risk. And that what trust does is enable action in the face of risk. And I think this is an important way of thinking about trust, not only across the role of technology companies, but also across other stakeholders. And I also came across interesting work that’s being done in the field of standardization at ISO, where there is a working group on trustworthiness, ISO working group 13, I believe it is, and a definition of trustworthiness as the ability to meet stakeholders’ expectations in a verifiable way. So we’ve already talked, and my fellow panelists have already talked a little bit about the definition of disinformation. At the Digital Trust and Safety Partnership, we have a glossary of trust and safety terms. And we also have our own definition for disinformation, false information that is spread intentionally and maliciously to create confusion, encourage distrust and potentially undermine political and social institutions. So I think about oftentimes the ultimate goal of a spreader of a piece of disinformation is to effectively DDoS the kinds of institutions, whether that is government, civil society, companies, academic institutions, by undoing the trust that individuals in society have in those institutions. And I think that this can be a particularly challenging area. There, when we were talking about misleading information, more broadly, misleading information that includes both misinformation and disinformation that we’ve discussed in this disinformation requiring understanding the intent behind this misleading content, which can oftentimes be difficult to discern even with all the tools. of the trade that Monica mentioned that companies have these days. To my mind, I think we should always be asking, when we’re talking about disinformation, disinformation to what end? What’s the objective? What is the harm to which this intentional spread of disinformation contributes? Because I think we think about different types of examples of disinformation, say foreign election interference, is very different from something like the kind of disinformation that drives scams and fraud. And these require very different approaches. So within the tech world, I mentioned that trust and safety is the function inside companies that’s responsible for doing the type of work that Monica mentioned. And a lot of this really is responding to things that we all agree is harmful and awful, whether that is child exploitation, extortion, or the kinds of scams and fraud that we’re really seeing on the rise at the moment. 
In other cases, this can be a challenging area where it’s hard to draw the line in terms of understanding what constitutes disinformation, what constitutes hateful speech. So at our partnership, what we want to do is increase transparency about how companies engage in trust and safety operations in a way that is not about telling companies what type of content or conduct they should allow on their product or service. Instead, at the Digital Trust and Safety Partnership, we are organized around best practices that companies can use to address all different kinds of challenges when it comes to trust and safety. And this certainly can include misleading content. of the type that we’ve been discussing. Turning to the second part of your question, Augustina, I think how the role of civil society is really essential in this space in order to help work on these challenges around disinformation. Civil society and non-governmental organizations contribute to researching and advocating for policy solutions, whether that is with governments or with companies. And I think what’s most important, civil society needs operating space to be able to do their work independent of pressure and harassment from governments, from companies, from other actors. So that to me is, I think, the single most important area where we need to give civil society the space for them to be able to do their work as a watchdog, holding both governments and industry accountable. And I do think that the World Economic Forum’s Global Coalition for Digital Safety does play an important role bringing together these kinds of stakeholders in a trusted space to increase that trust between government, between companies and civil society, and increasing transparency through the public kinds of publications that we’ve put together in the coalition and that we’ve published so that we’re transparent about the work we’re doing. So I’ll stop there and welcome questions later.


Agustina Callegari: Thank you, David. And I think I will be sharing more about some of the publications that we have done this year related to the topics that we are discussing. And as you highlighted, there are different approaches for mis- and disinformation. And also, there are different roles for different stakeholders. So now, I would like to ask a question to Said Aldehry, Director of the Centre for Future Studies at the Dubai University. and I think you are online, I can see you there. And my question to you is, how has the UAE approached this information and what lessons do we find from governance efforts in the region? So-


Saeed Al Dhaheri: Great, thank you. Thank you, Augustina. And thanks Judith for moderating this panel. I’m really very happy to be with you today remotely. I can see some of my colleagues with you in Riyadh, Dr. Houda Al-Khozaimi. But let me introduce myself. My name, like Augustina has mentioned, Saeed Al-Dahri. I’m the director for the Center of Future Studies at the University of Dubai. I’m also president of the UAE Robotics and Automation Society. And I’m a member of a civil society called eSAFE, Emirates Safer Internet Society. And I’ll speak a little bit about, you know, the role of civil society here in the UAE in terms of fighting disinformation. But I’ll be speaking about the UAE efforts to fight disinformation. So UAE adopted a multifaceted approach to disinformation. We, in 2018, as part of the digital wellbeing and happiness program in the UAE, the government has launched a digital wellbeing online platform for children, parents, and the UAE society at large to promote positive and safe digital usage. For example, supporting a good code of conduct and a good citizenship and behavior in the digital world. The council has issued a digital wellbeing policy and charter for all the citizens and residents in the UAE. And this has four components. Talking about digital footprint, what we do in the digital world, and how we can. you know, a good digital impact or digital footprint in our interactions, talks about online harm, digital ethics is part of this, and of course, cyberbullying is also part of this. The platform has managed to conduct several webinars, online sessions, community outreach, through lectures, schools, interactions, and workplace programs. And also part of this is really educating the population at large, whether talking about people at the workplace, students at schools. This is where our role as eSAFE, Emirates Safer Internet Society, kicks in, that we’ve been conducting several sessions to children at schools, and also to parents, and trying to reach people at the workplace to inform and educate about disinformation. Another thing is also, what the UAE has also, in 2021, has published, has come up with a law, the UAE law number 34, that is concerning the fight against rumours, again, here, you know, a lot of this has to relate to disinformation, and cybercrime. And Article 52 in this law aims to prevent the spread of false, malicious, or misleading information that challenges officially announced news, disturb probably public peace, threaten people, or harm public interest. And the national economy, public order, or public health. And there is a big penalty as part of this law, that could, if a person is convicted, can be prisoned for one year, or pay a penalty of about… 100,000 UAE dirhams. Also, part of what we do at the eSafe, the Emirates Safer Internet Society, is that our members are active among the boards of social media platforms. For example, there is a board here in the UAE across the region with TikTok, that one of our members is being active on that and trying to discuss how to fight disinformation with social media platforms such as TikTok. So we’ve been doing a multifaceted approach through the use of online platform as a soft power for the UAE to reach to the community at large and educate them about the good citizenship, about the good conduct and good behavior. 
And of course, there is a regulation which I would really love to see more of these regulations coming worldwide as lack of regulation and lack of accountability which allows those actors really to spread disinformation. And of course, the part of educating and bringing the skills to people, I believe now that it’s becoming very important that people understand about the media literacy, understand the critical thinking skills which they need to discern between when it comes to looking at information. There is a mass, like my colleagues here, Dr. Huda and everyone mentioned, a mass of disinformation and the way to fight it is by having a critical judgment from people in society to be able to tell oh, this is inauthentic or maybe this is not authentic or maybe this is a misinformation and disinformation. And I will stop here.


Agustina Callegari: Thank you, David. And I think that what you are mentioning about the importance of online literacy, of digital literacy is very aligned what I’m going to share next in terms of the work that we are doing at the Global Coalition for Digital Safety. Judith, if we are fine with the time, I’m going to give a brief introduction of the work that we are doing, and then after that, we are going to open the floor for questions and comments both on-site and online. So, I think David already mentioned some of the work that we are doing at the Global Coalition for Digital Safety at the World Economic Forum. And the coalition, which started almost two years ago, has the goal of identifying ways, creating best practices for tackling harmful content online. And being very simple, we can divide that in terms of content that is harmful but illegal in many jurisdictions, could be related to some unceasing material, but also content that is harmful but in some jurisdictions and also related to the challenges that we are having related to regulations, as it was mentioned, are illegal in many places that could be related to mis- and disinformation. We have a coalition of over 45 members from all stakeholders with the goal of working together to identify some of the challenges that we are doing, and most importantly, work together to identify and promote the solutions that we are seeing working out there. So, most of our publications are framed as showcasing what are the different efforts that our community and that the society is doing to tackle some of these challenges. And in terms of disinformation, concretely, we have been taking what we are calling a call of society approach. for this information, and we’ve been focusing a lot of our work in how media literacy plays a role in combining this information. Of course, understanding that literacy is not a silver bullet, it’s not going to solve the problems, and we have seen the complexity of the challenge with the panel, and we are seeing that there are different mitigation strategies being taken by different stakeholders, but we wanted to go deeper into how literacy can help tackling the issue, and this includes understanding how false information is produced, distributed, and consumed, and identifying the necessary skills at this stage to counter it. And we have done two things, or more than two things, I would say. But to start with, as a group, we have produced, and also very aligned to what David was saying about the glossary that they have produced, we have developed what we are calling the typology of online harms, because the first challenge that we have identified is that there is a lack of common language of what we mean, not only by mis and disinformation, and we have seen that there are some definitions that are helping us to advance the conversation that we are seeing today, and that lack of common understanding sometimes make this conversation challenging. So we have tried to make progress on that by creating this typology that defines not only mis and disinformation, but also other harms, like CSAM, CSIM, cyberbullying, and many others. And we have done that through human rights lens, because we wanted to show how the different human rights frameworks that we are seeing should be and are applied to the different conventions or principles, such as, for example, the UN Convention. 
on the right of child and general comment, the Convention of the Elimination of All Forms of Discrimination against Women, and the International Covenant on Economic, Social and Cultural Rights. So with this focus on fundamental rights, what we want to acknowledge is that all harm done can potentially lead to an unlawful denial or participation of freedom of expression, and these rights must be balanced against an individual’s right to be free from all harm and the right of dignity. So that’s the framework that we have taken to our work for the typology, and as I said, I think that everyone, I would say most people that are attending at IGF really believe in the power of multi-stakeholder collaboration, and the way that we work brings together all the stakeholders, the tech companies, the public officials, the civil society, international organisations to exchange best practices and coordinate actions. Again, the importance of actions and solutions is what we try to focus on, aiming at reducing online harms and, of course, mis- and disinformation. So I will stop here, so there is time for questions and comments. Judith, back to you to see if there are comments or questions on site. I’m also going to be monitoring the chat here to see if there are any reactions online. Thank you very much.


Judith Espinoza: Thank you so much, Agustina. I want to open up the floor to questions. Yeah, over here. I’m going to go ahead and pass around this microphone, and then we can work from there, yeah? So here. So if you want to say who you’re from, where you’re coming from, and your name, you can go ahead. Okay. Oops. Just a second.


Audience: Okay. Thank you, Judith. I can’t listen to my own voice, but that’s fine. So I’m Balthazar from Indonesia, from Cyber Internet Lab. We do research on disinformation and misinformation in Southeast Asia. Sometimes, to our observation, disinformation would be more prevalent in moment of periodic crisis, be it pandemic, natural disaster, elections. And there is a high system of disinformation and low system of disinformation outside of this periodic urgency. So do you reckon that this governance efforts, or technological efforts, or digital literacy efforts should be differentiated between during the moment of periodic crisis and outside the periodic crisis? Or should we just pursue a long-term institutionalized solution that is resilient to the high tides of disinformation during certain times? Thank you.


Madan Oberoi: I totally agree with you that there are cyclical variations. And this is totally dependent on the immense opportunities which these times provide. And also, the malicious actors get more active there. But in terms of efforts to counter this, I would say the efforts would remain, the approach would remain same. The quantum of that would need to be changed according to the, for example, if there’s a political event happening, it would need to be scaled up. But in terms of change, in terms of approach, may not be the right.


Dr. Hoda A. Alkhzaimi: And I’d like to add to that very insightful, I think, remark is the fact that cybersecurity stability aspect is an opportunistic approach, both from an attacker side and a defender side. So it’s not just the seasonality amidst the crisis. It’s also the seasonality amidst different behaviors that would exist across different jurisdictions and different nuances that would exist in the indigenous fabric of the culture. Sometimes that would be a door of opportunity to maybe magnify the impact of misinformation and disinformation at that space of geographical element. And I think it’s very important to tackle building a holistic technological solution, I would say, or a holistic approach or a holistic framework that would correct to the precision of the solution regardless of the seasonality, regardless of the locality, and be able to absorb the nuances of that kind of situation. I’ll give you an example. I mean, in the Middle East, during the past period of time, we have, thanks to the platform efforts, we had huge campaigns on de-escalating the impact of misinformation and disinformation. But as well, that was borderline touching censorship approaches where people felt, and journalists felt from the region that they were censored against Western journalism. And they countered that by initiating their own counter algorithms, even though they didn’t have that kind of fancy, very deep knowledge of technology. So I think we are seeing this kind of counter movement across the globe, where sometimes technology is being countered by the common community, I would say, knowledge around the culture and nuances. So they can stop these, I would say, solutions from being adapted at mass. We have to maybe try to sit together and co-create the solutions to the right level of precision. Thank you.


Monica Gizzi: Thank you. I totally agree with my colleague. spikes, we need a sustainable and constant work around not only disinformation and misinformation, but all of the problems that were mentioned here, especially by David. However, of course, in times, in certain times, there is a spike, especially in intent to harm, and especially as people get more emotional, right? We got more emotional during COVID. We get more emotional during elections because, again, our emotions are speaking. So I can give you the example of Brazil. We just came out of a very huge municipal election this October. So every other year, we hold elections in Brazil, be them presidential or municipal. And the years of elections are years in which we have a larger number of people working not only with us to tackle the misinformation, disinformation campaigns that we see, because we need to act fast. So misinformation around, for instance, the number that people should type in to vote for a specific candidate, right? That’s something we need to act very, very, very fast, because if we take, you know, a little bit longer, that might harm and that might have a political impact in the real world. So because we need to act so much faster, we have a larger number of people working with us. And we have very close collaborations with law enforcement agencies, with the electoral authorities in the country, for instance. But let me remind you that big companies like Meta, for us, globally, this is an ongoing effort because the world is having elections all throughout the year, pretty much. And the good part about it is that we’re learning as we go. And with every election cycle, we learn where we’re not doing so great and how we can improve it for the next one.


Judith Espinoza: A hand here.


Andrew Campling: Hi, thank you. My name is Andrew Campling. Amongst other things, I’m a trustee for the Internet Watch Foundation, which is focused on countering child sex abuse material. So trust and safety efforts by platform operators are welcome, but they’re often, I’d argue, mainly insufficient, especially when platforms don’t even enforce their own terms of service effectively. People are even monetizing disinformation by leveraging the algorithms of the platforms and you’ll find reports from the Center for Countering Digital Hate and others that highlight this. So with that in mind, should civil society groups be demanding more of governments to mandate better standards to protect their citizens from harm, especially vulnerable groups, ideally with significant legal consequences for both the companies and their senior executives in the event that they don’t take effective actions? Thank you.


Judith Espinoza: For the interest of time, I’m going to ask only one of you to take the question.


Dr. Hoda A. Alkhzaimi: Thank you so much for the question. I think we have done some work with the ITU and within their POP kind of initiatives, which is for children’s safety, with the platforms as well. I totally hear what you’re saying. I think during the research that we have conducted in EmergeTech, our emerging tech lab, we have seen the impact of the different layers in terms of trying to hedge the risk of misinformation and disinformation on children and trying as well to make sure that we address the different scales of harm on the different vulnerable groups that we have. And our de facto solution is trying to advocate not just for technological solutions or policy work or regulation, but an efficacy structure, because we need to measure efficacy and measure impact of all of these solutions. So within our platform, we’re trying to measure the impact of certain type of, for example, trust erosion happened because of misinformation or because of, like what David have just mentioned, a denial of service kind of aspect that would be targeted to a specific structure. How can we help the passive actors and the active actors label? So because sometimes you need to close windows of opportunity within financial markets, it’s easy. There is a compliance structure and they have to follow the compliance structure. It’s quite rigid in terms of reporting, in terms of addressing these risks. But within the global community platform for misinformation and disinformation, this is not quite straightforward, especially when the targeted group are children and women and vulnerable groups on the platforms. So what we suggested is to tie in incentives from the board level, incentives of the CEOs, incentives of the awards of the stakeholders of those platforms, and especially appraisal of the evaluation structure of those platforms to their active participation and encountering and hedging those risks for those groups. And of course, it would take a bit of time to kind of advocate for that. We’ve seen quite a very impactful results coming out of this. I mean, it goes all with the behavioral science kind of aspects and nudging different actors rather than just funding for heavy tech solutions.


Judith Espinoza: Thank you so much. I just got a sign. We’re at the three minute mark to close the session. So I apologize to the two gentlemen in the back with additional questions. Maybe we can stay at the end and we can have a chat more informally. But so I want to take this opportunity to wrap up and also maybe address some of the last points that were made. First of all, I think Agustin is absolutely right. These are the types of conversations that we have to be having to be able to effectively tackle this information. Right. of that. We have industry, we have civil society, we have academia, we have an IO. And it takes really everyone. And the other thing is, you know, we’re all agents in this sort of new reality of the internet that we exist in, right? Saeed very aptly touched on this sort of good citizenship online. This is part of an evolving world that we’re all taking part in. And we’re not passive users or guests in that space. How we interact, how we interact with each other, how we use spaces online, all affects the sort of utility of disinformation. And very much the last gentleman’s point, there is monetization value to disinformation, right? We’re all very well pig butchering scams. And that’s literally a criminal intent to physically harm people, right? There is with emerging technology, more of a merging and sort of phasing of what physical and digital is. And those are very, very real harms and risks. And that can’t all be couched into responsibility for a platform or one singular civil society organization, right? It takes collective action from government, from platform, civil society, from academia. We need a full society approach. And this is the work that we try to do not only at the World Economic Forum, but that all of you, I think, respectively try to take, right? It’s efforts like these that inform citizens, that inform users, that make people more resilient, that make people better citizens online, that make people better users. And as David and Agustina were saying, we’re really create sort of a critical lens when engaging with material online, the same way that you would use a critical lens when you’re, I don’t know, reading a newspaper, right? The first instances of misinformation started, I probably can’t recount, but the more famous ones, at least in the United States where, you know, yellow press newspapers after the founding of, and I’m a native New Yorker, which is why the example is coming up, but of yellow press, right? People were buying newspapers because they thought it would give them favorable accreditation to business endeavors, mostly on the docks when import started. But so with that, I leave you, right? I invite you all to think about this. and be active participants in the sort of worlds that you’re in, communities that you’re engaging and creating around you. And we welcome you in these talks and these dialogues. And again, thank you so much to all our esteemed panelists. I’m going to ask them a round of applause. They’ve come a long way. And also yourselves, you’re choosing to spend the second. Thank you to our panelists online. And with that, I close. Thank you. Thank you, everyone.



Madan Oberoi

Speech speed

137 words per minute

Speech length

820 words

Speech time

358 seconds

Trust erosion in institutions and between individuals

Explanation

Disinformation erodes trust in institutions, between individuals, and in society as a whole. This impacts international police collaboration and the core basis of organizations like Interpol.


Evidence

Interpol’s work is based on international police collaboration, which relies heavily on trust.


Major Discussion Point

Impacts and Dangers of Disinformation


Agreed with

David Sullivan


Agreed on

Trust erosion as a major impact of disinformation


Multi-jurisdictional cooperation and legal framework harmonization

Explanation

Addressing disinformation requires cooperation across jurisdictions and harmonization of legal frameworks. This is challenging due to jurisdictional restrictions on law enforcement while criminals operate without such limitations.


Evidence

Criminals do not have jurisdictional restrictions, but law enforcement is bound by them, leading to bureaucratic procedures.


Major Discussion Point

Approaches to Combating Disinformation


Agreed with

David Sullivan


Agustina Callegari


Agreed on

Need for multi-stakeholder collaboration


Differed with

Dr. Hoda A. Alkhzaimi


Monica Gizzi


Differed on

Approach to combating disinformation


AI and blockchain for verification

Explanation

Emerging technologies like AI and blockchain can be used to combat disinformation. These technologies can help in verifying facts and improving trust in information.


Major Discussion Point

Role of Technology and Platforms


Agreed with

Dr. Hoda A. Alkhzaimi


Monica Gizzi


Agreed on

Role of technology in combating disinformation



Dr. Hoda A. Alkhzaimi

Speech speed

134 words per minute

Speech length

1384 words

Speech time

619 seconds

Economic impacts on financial markets and global trade

Explanation

Disinformation can have significant economic impacts on financial markets and global trade. It can lead to market volatility, devaluation of businesses, and disruption of supply chains.


Evidence

A single tweet in 2013 led to a 140-point drop in the stock market, resulting in a $136 billion loss in a single day. Stock markets lose around $39 billion annually due to disinformation.


Major Discussion Point

Impacts and Dangers of Disinformation


Need for transparent trust indicators

Explanation

There is a need for transparent trust indicators on digital platforms to help users discern the reliability of information. This requires implementing various layers of algorithmic stacking and trust models (a minimal illustrative sketch follows this entry).


Evidence

Mentions of federated learning, zero knowledge proofs, and implementing trust models to improve the stack of trust models across platforms.


Major Discussion Point

Role of Technology and Platforms


Agreed with

Madan Oberoi


Monica Gizzi


Agreed on

Role of technology in combating disinformation


Differed with

Madan Oberoi


Monica Gizzi


Differed on

Approach to combating disinformation
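The panel did not specify how such indicators would be computed. Purely as an illustration, the sketch below shows one way a platform might combine several verification signals into a single, explainable trust label; the signal names, weights and thresholds are hypothetical assumptions, not a description of any existing system.

```python
# Minimal sketch (hypothetical): combining several verification signals into a
# single, transparent trust indicator for a piece of content. Signal names,
# weights, and thresholds are illustrative assumptions, not any platform's API.
from dataclasses import dataclass

@dataclass
class TrustSignals:
    source_reputation: float   # 0-1, e.g. history of the publishing account
    fact_check_score: float    # 0-1, agreement with independent fact-checkers
    provenance_verified: bool  # e.g. signed media / content credentials present

def trust_indicator(signals: TrustSignals) -> dict:
    """Return a score plus the per-signal breakdown so the label is explainable."""
    score = (
        0.4 * signals.source_reputation
        + 0.4 * signals.fact_check_score
        + 0.2 * (1.0 if signals.provenance_verified else 0.0)
    )
    label = "high" if score >= 0.7 else "medium" if score >= 0.4 else "low"
    return {
        "score": round(score, 2),
        "label": label,
        "breakdown": {
            "source_reputation": signals.source_reputation,
            "fact_check_score": signals.fact_check_score,
            "provenance_verified": signals.provenance_verified,
        },
    }

if __name__ == "__main__":
    print(trust_indicator(TrustSignals(0.8, 0.3, False)))
```

Returning the per-signal breakdown alongside the score is what would make such an indicator "transparent" in the sense discussed: users can see why a label was assigned, not just what it is.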



David Sullivan

Speech speed

137 words per minute

Speech length

889 words

Speech time

387 seconds

Undermining of political and social institutions

Explanation

Disinformation aims to undermine trust in political and social institutions. It effectively creates a DDoS attack on institutions by eroding trust that individuals have in them.


Evidence

Definition of disinformation as false information spread intentionally to create confusion, encourage distrust, and potentially undermine political and social institutions.


Major Discussion Point

Impacts and Dangers of Disinformation


Agreed with

Madan Oberoi


Agreed on

Trust erosion as a major impact of disinformation


Civil society research and advocacy

Explanation

Civil society plays a crucial role in researching and advocating for policy solutions to combat disinformation. They need operating space to work independently and hold both governments and industry accountable.


Evidence

Mentions the importance of giving civil society space to act as a watchdog for both governments and industry.


Major Discussion Point

Approaches to Combating Disinformation


Agreed with

Madan Oberoi


Agustina Callegari


Agreed on

Need for multi-stakeholder collaboration


Balancing free speech with harm prevention

Explanation

Addressing disinformation presents challenges in balancing free speech with preventing harm. It can be difficult to draw the line between what constitutes disinformation and what is protected speech.


Major Discussion Point

Challenges in Addressing Disinformation



Monica Gizzi

Speech speed

133 words per minute

Speech length

1506 words

Speech time

676 seconds

Emotional manipulation of users

Explanation

Disinformation often targets people’s emotions and beliefs, making it more likely to be shared. It uses catchy phrases and headlines that appeal to people’s existing beliefs and emotions.


Evidence

Describes how misinformation often has catchy openings and phrases that speak to people’s hearts and emotions, making them eager to share with friends and family.


Major Discussion Point

Impacts and Dangers of Disinformation


Algorithmic detection and fact-checking partnerships

Explanation

Platforms use AI algorithms to detect potential misinformation and partner with fact-checking agencies to verify content. This helps reduce the spread of false information on social media platforms (a minimal illustrative sketch of such a pipeline follows this entry).


Evidence

Meta works with over 100 fact-checking agencies globally, covering most languages in jurisdictions where they offer services. In Brazil alone, they work with six different fact-checking agencies.


Major Discussion Point

Approaches to Combating Disinformation


Agreed with

Madan Oberoi


Dr. Hoda A. Alkhzaimi


Agreed on

Role of technology in combating disinformation


Differed with

Madan Oberoi


Dr. Hoda A. Alkhzaimi


Differed on

Approach to combating disinformation
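As a rough, non-authoritative illustration of the detect-then-verify flow described above, the sketch below routes posts flagged by an automated classifier to a fact-checking queue and translates a fact-checker's rating into a label and reduced distribution. The threshold, function names and toy classifier are assumptions for the example; Meta's actual systems are not described here.

```python
# Minimal sketch (hypothetical) of the detect-then-verify flow: an automated
# classifier flags likely misinformation, flagged items are routed to human
# fact-checking partners, and the partner's rating determines the label.
from collections import deque
from typing import Callable

REVIEW_THRESHOLD = 0.8          # assumed score above which content goes to review
review_queue: deque = deque()   # items awaiting a fact-checker's rating

def triage(post_id: str, text: str, classifier: Callable[[str], float]) -> None:
    """Send likely misinformation to fact-checkers instead of removing it outright."""
    if classifier(text) >= REVIEW_THRESHOLD:
        review_queue.append(post_id)

def apply_rating(post_id: str, rating: str) -> dict:
    """Translate a fact-checker rating into platform actions (label + reduced reach)."""
    if rating == "false":
        return {"post_id": post_id, "label": "False information", "reduce_distribution": True}
    if rating == "partly_false":
        return {"post_id": post_id, "label": "Partly false information", "reduce_distribution": True}
    return {"post_id": post_id, "label": None, "reduce_distribution": False}

if __name__ == "__main__":
    toy_classifier = lambda text: 0.9 if "miracle cure" in text.lower() else 0.1
    triage("p1", "Miracle cure doctors don't want you to know!", toy_classifier)
    print(review_queue)                 # deque(['p1'])
    print(apply_rating("p1", "false"))  # label applied, distribution reduced
```

Keeping the human fact-checker rating, rather than the classifier score alone, as the trigger for any label mirrors the division of labour between automated detection and fact-checking partners described in this entry.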


Difficulty distinguishing misinformation from disinformation

Explanation

It can be challenging to differentiate between misinformation (unintentional spread of false information) and disinformation (intentional spread with malicious intent). This complicates efforts to address the issue effectively.


Evidence

Explains the difference between disinformation (with intent to harm) and misinformation (without intent to harm), and how both need to be tackled by platforms.


Major Discussion Point

Challenges in Addressing Disinformation


Platform tools to add friction to false content

Explanation

Social media platforms implement tools to add friction to the spread of false content. This includes labeling content as false, adding filters, and providing pop-up warnings before sharing (a minimal illustrative sketch follows this entry).


Evidence

Describes the process of labeling false content, adding filters similar to graphic content warnings, and providing pop-up warnings when users attempt to share content flagged as false.


Major Discussion Point

Role of Technology and Platforms
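As a minimal sketch of the "friction" idea, assuming a simple lookup of fact-checker ratings and an interstitial confirmation step (both hypothetical), the code below shows how a pre-share warning could interpose itself before a labelled post is reshared.

```python
# Minimal sketch (hypothetical) of "friction" before sharing: if a post has been
# rated false by fact-checkers, show a warning interstitial and only share after
# the user explicitly confirms. Data structures and messages are illustrative.
fact_check_labels = {"post_42": "False information"}  # assumed lookup of rated posts

def share(post_id: str, confirm: callable) -> bool:
    """Return True if the post is shared; interpose a warning for labelled posts."""
    label = fact_check_labels.get(post_id)
    if label is None:
        return True  # no friction for unrated content
    warning = f"Independent fact-checkers rated this post: {label}. Share anyway?"
    return bool(confirm(warning))

if __name__ == "__main__":
    # Simulate a user who reads the warning and decides not to share.
    shared = share("post_42", confirm=lambda msg: (print(msg), False)[1])
    print("Shared:", shared)
```

The design point is that rated content is not removed outright; the extra confirmation step simply slows resharing, reflecting the "friction rather than removal" compromise noted later in the takeaways.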



Saeed Al Dhaheri

Speech speed

126 words per minute

Speech length

711 words

Speech time

336 seconds

Digital literacy and education initiatives

Explanation

The UAE has implemented digital literacy and education initiatives to combat disinformation. These efforts focus on promoting positive and safe digital usage and educating the population about online risks.


Evidence

Mentions the launch of a digital wellbeing online platform in 2018 and the efforts of the Emirates Safer Internet Society in conducting educational sessions for children, parents, and workplaces.


Major Discussion Point

Approaches to Combating Disinformation


Multifaceted government approach with regulations and platforms

Explanation

The UAE has adopted a multifaceted approach to combat disinformation, including regulations and online platforms. This includes laws against spreading false information and initiatives to promote digital wellbeing.


Evidence

Cites the UAE law number 34 from 2021 that aims to prevent the spread of false, malicious, or misleading information, with penalties for violations.


Major Discussion Point

Approaches to Combating Disinformation



Audience

Speech speed

139 words per minute

Speech length

142 words

Speech time

61 seconds

Increased prevalence during crises and events

Explanation

Disinformation tends to be more prevalent during periods of crisis such as pandemics, natural disasters, and elections. This raises questions about whether efforts to combat disinformation should be differentiated between crisis periods and normal times.


Evidence

Observation from research conducted in Southeast Asia showing higher levels of disinformation during periodic crises.


Major Discussion Point

Challenges in Addressing Disinformation



Andrew Campling

Speech speed

130 words per minute

Speech length

136 words

Speech time

62 seconds

Harm to vulnerable groups like children

Explanation

Disinformation can particularly harm vulnerable groups, including children. Current trust and safety efforts by platforms are often insufficient to protect these groups effectively.


Evidence

Reference to his role as a trustee for the Internet Watch Foundation, which focuses on countering child sex abuse material.


Major Discussion Point

Impacts and Dangers of Disinformation


Monetization of disinformation on platforms

Explanation

People are monetizing disinformation by leveraging platform algorithms. This raises concerns about the effectiveness of current platform policies and the need for stronger regulations.


Evidence

Mentions reports from the Center for Countering Digital Hate highlighting this issue.


Major Discussion Point

Challenges in Addressing Disinformation


Importance of platform accountability

Explanation

There is a need for greater accountability from platforms in enforcing their own terms of service and protecting users from harm. This may require government mandates and legal consequences for companies and executives.


Major Discussion Point

Role of Technology and Platforms



Agustina Callegari

Speech speed

144 words per minute

Speech length

1086 words

Speech time

449 seconds

Multi-stakeholder collaboration on solutions

Explanation

Addressing disinformation requires collaboration between multiple stakeholders including tech companies, public officials, civil society, and international organizations. This approach allows for the exchange of best practices and coordinated actions.


Evidence

Describes the work of the Global Coalition for Digital Safety at the World Economic Forum, which brings together over 45 members from various stakeholder groups.


Major Discussion Point

Role of Technology and Platforms


Agreed with

Madan Oberoi


David Sullivan


Agreed on

Need for multi-stakeholder collaboration


Agreements

Agreement Points

Trust erosion as a major impact of disinformation

speakers

Madan Oberoi


David Sullivan


arguments

Trust erosion in institutions and between individuals


Undermining of political and social institutions


summary

Both speakers emphasize how disinformation erodes trust in institutions and between individuals, undermining the foundations of society and international cooperation.


Need for multi-stakeholder collaboration

speakers

Madan Oberoi


David Sullivan


Agustina Callegari


arguments

Multi-jurisdictional cooperation and legal framework harmonization


Civil society research and advocacy


Multi-stakeholder collaboration on solutions


summary

The speakers agree that addressing disinformation requires cooperation across jurisdictions, sectors, and stakeholders, including government, industry, and civil society.


Role of technology in combating disinformation

speakers

Madan Oberoi


Dr. Hoda A. Alkhzaimi


Monica Gizzi


arguments

AI and blockchain for verification


Need for transparent trust indicators


Algorithmic detection and fact-checking partnerships


summary

The speakers highlight the importance of leveraging technology, such as AI, blockchain, and algorithmic detection, to combat disinformation and improve trust in information.


Similar Viewpoints

Both speakers emphasize the importance of educating users to recognize and resist emotional manipulation in disinformation, with Monica highlighting the emotional nature of disinformation and Saeed discussing UAE’s digital literacy initiatives.

speakers

Monica Gizzi


Saeed Al Dhaheri


arguments

Emotional manipulation of users


Digital literacy and education initiatives


Both speakers point out the economic implications of disinformation, with Dr. Alkhzaimi focusing on broader market impacts and Campling highlighting the monetization of disinformation on platforms.

speakers

Dr. Hoda A. Alkhzaimi


Andrew Campling


arguments

Economic impacts on financial markets and global trade


Monetization of disinformation on platforms


Unexpected Consensus

Balancing free speech with harm prevention

speakers

David Sullivan


Monica Gizzi


arguments

Balancing free speech with harm prevention


Difficulty distinguishing misinformation from disinformation


explanation

Despite representing different sectors (civil society and industry), both speakers acknowledge the challenges in addressing disinformation while preserving free speech, highlighting the complexity of the issue beyond simple platform policies.


Overall Assessment

Summary

The main areas of agreement include the erosion of trust as a key impact of disinformation, the need for multi-stakeholder collaboration, and the role of technology in combating disinformation. There is also consensus on the importance of digital literacy and the economic implications of disinformation.


Consensus level

There is a moderate to high level of consensus among the speakers on the fundamental challenges and approaches to combating disinformation. This consensus suggests a shared understanding of the problem’s complexity and the need for collaborative, multi-faceted solutions. However, the specific implementation of these solutions may still require further discussion and negotiation among stakeholders.


Differences

Different Viewpoints

Approach to combating disinformation

speakers

Madan Oberoi


Dr. Hoda A. Alkhzaimi


Monica Gizzi


arguments

Multi-jurisdictional cooperation and legal framework harmonization


Need for transparent trust indicators


Algorithmic detection and fact-checking partnerships


summary

Speakers proposed different primary approaches to combat disinformation, ranging from legal harmonization to technological solutions and platform-based fact-checking.


Unexpected Differences

Economic impact of disinformation

speakers

Dr. Hoda A. Alkhzaimi


Other speakers


arguments

Economic impacts on financial markets and global trade


explanation

Dr. Alkhzaimi’s focus on the significant economic impacts of disinformation was not echoed by other speakers, who primarily discussed social and political implications. This unexpected emphasis highlights an often overlooked aspect of the disinformation problem.


Overall Assessment

summary

The main areas of disagreement centered around the primary approaches to combating disinformation, the role of different stakeholders, and the emphasis on various impacts of disinformation.


difference_level

The level of disagreement among speakers was moderate. While there was general consensus on the dangers of disinformation, speakers differed in their emphasis on solutions and impacts. These differences reflect the complex, multifaceted nature of the disinformation problem and suggest that a comprehensive approach involving multiple stakeholders and strategies may be necessary.


Partial Agreements


Both speakers agree on the need for greater accountability in addressing disinformation, but differ on the primary actors responsible – David emphasizes civil society’s role, while Andrew focuses on platform accountability and government mandates.

speakers

David Sullivan


Andrew Campling


arguments

Civil society research and advocacy


Importance of platform accountability



Takeaways

Key Takeaways

Disinformation poses significant threats to trust in institutions, economic stability, and social cohesion


Combating disinformation requires a multi-stakeholder approach involving governments, platforms, civil society, and users


Technology like AI can be used to both spread and combat disinformation


Digital literacy and critical thinking skills are crucial for users to identify misinformation


There is a need to balance free speech protections with preventing harm from disinformation


Resolutions and Action Items

Continue multi-stakeholder collaboration through forums like the Global Coalition for Digital Safety


Platforms to invest in AI detection and fact-checking partnerships


Governments to consider regulations and legal frameworks to address disinformation


Promote digital literacy and ‘good digital citizenship’ education initiatives


Unresolved Issues

How to effectively distinguish between misinformation and disinformation


Addressing the cross-border nature of online disinformation


Determining appropriate levels of platform accountability and regulation


How to combat the monetization of disinformation on platforms


Suggested Compromises

Balancing free speech protections with content moderation to prevent harm


Using ‘friction’ on potentially false content rather than outright removal


Involving multiple stakeholders in developing solutions rather than top-down approaches


Thought Provoking Comments

Disinformation by definition is a false and misleading and manipulated synthetic information, which is created for the purpose of cheating, for the purpose of harming, for the purpose of wrongfully influencing the opinion. So the very intent is wrong.

speaker

Madan Oberoi


reason

This comment provides a clear definition of disinformation that emphasizes the intentional nature of the harm, setting the stage for the rest of the discussion.


impact

It established a foundation for distinguishing between misinformation and disinformation, which was referenced throughout the conversation by other speakers.


In 2013, I think there was a tweet about a harm that’s being initiated in the White House that led to 140 points drop in the stock market. This 140 points led to $136 billion loss in a single day within minutes.

speaker

Dr. Hoda A. Alkhzaimi


reason

This concrete example illustrates the real-world economic impact of disinformation, making the abstract concept more tangible.


impact

It shifted the conversation to focus on the economic consequences of disinformation, leading to a deeper discussion on the implications for global markets and financial systems.


We work with over 100 fact-checking agencies across the globe. They’re covering pretty much every single language of the jurisdictions in which we offer our services.

speaker

Monica Gizzi


reason

This comment provides insight into the scale and complexity of efforts to combat misinformation on global platforms.


impact

It introduced the topic of practical measures being taken by tech companies, leading to a discussion on the role of technology and human intervention in addressing disinformation.


At our partnership, what we want to do is increase transparency about how companies engage in trust and safety operations in a way that is not about telling companies what type of content or conduct they should allow on their product or service.

speaker

David Sullivan


reason

This comment highlights the importance of transparency in trust and safety operations, while also acknowledging the complexity of content moderation.


impact

It shifted the conversation towards the role of civil society and the importance of collaboration between different stakeholders in addressing disinformation.


UAE adopted a multifaceted approach to disinformation. We, in 2018, as part of the digital wellbeing and happiness program in the UAE, the government has launched a digital wellbeing online platform for children, parents, and the UAE society at large to promote positive and safe digital usage.

speaker

Saeed Al Dhaheri


reason

This comment provides a concrete example of a national approach to digital literacy and combating disinformation.


impact

It broadened the discussion to include government initiatives and the importance of digital literacy in combating disinformation.


Overall Assessment

These key comments shaped the discussion by providing a comprehensive view of the disinformation landscape, from its definition and economic impact to practical measures being taken by various stakeholders. The conversation evolved from theoretical concepts to real-world examples and solutions, emphasizing the need for a multi-stakeholder approach involving tech companies, governments, civil society, and individual users. The discussion highlighted the complexity of the issue and the importance of balancing content moderation with transparency and user empowerment through digital literacy.


Follow-up Questions

How can we use data to safeguard stock markets from the impacts of misinformation?

speaker

Dr. Hoda A. Alkhzaimi


explanation

This is important to protect financial markets and reduce economic losses caused by misinformation.


How can we incorporate trust indicators transparently across platforms?

speaker

Dr. Hoda A. Alkhzaimi


explanation

This is crucial for helping users determine the trustworthiness of information they encounter online.


How can we improve attribution and bring consequences to those responsible for disinformation campaigns?

speaker

Madan Oberoi


explanation

This is important for deterring the spread of disinformation and holding bad actors accountable.


How can blockchain technology be used for verifying facts to combat disinformation?

speaker

Madan Oberoi


explanation

This explores potential technological solutions to enhance fact-checking and verification processes.


Should governance efforts, technological efforts, or digital literacy efforts be differentiated between moments of periodic crisis and outside periodic crisis?

speaker

Balthazar (audience member)


explanation

This is important for developing effective strategies to combat disinformation during both high-risk periods and normal times.


Should civil society groups be demanding more of governments to mandate better standards to protect citizens from harm, especially vulnerable groups?

speaker

Andrew Campling (audience member)


explanation

This addresses the potential need for stronger regulations and legal consequences to ensure platforms take effective action against disinformation and other online harms.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.