High-Level Session 1: Navigating the Misinformation Maze: Strategic Cooperation For A Trusted Digital Future

15 Dec 2024 07:10h - 08:10h

Session at a Glance

Summary

This panel discussion focused on the challenges of misinformation and disinformation in the digital age, particularly in light of emerging technologies like AI. Participants from government, international organizations, and the private sector explored the sources, impacts, and potential solutions to combat false information online.

Key points included the rapid spread of misinformation through social media platforms and messaging apps, which has been exacerbated by AI tools that can create highly convincing fake content. Panelists noted that while misinformation has always existed, its reach and speed have increased dramatically in the digital era. They emphasized that not all misinformation is equally harmful, but some can have serious consequences, such as during the COVID-19 pandemic.

The discussion highlighted the need for a multi-stakeholder approach involving governments, tech companies, civil society, and international organizations to address this complex issue. Suggestions included developing AI-powered detection tools, implementing content moderation practices, and promoting digital literacy. However, panelists also stressed the importance of balancing efforts to combat misinformation with protecting freedom of expression and avoiding censorship.

Several speakers mentioned ongoing initiatives and regulations, such as the EU’s Digital Services Act, as potential models for addressing misinformation. The importance of international cooperation and forums like the Internet Governance Forum for sharing best practices was emphasized.

The panel concluded by acknowledging the challenges of the rapidly evolving digital landscape and the need for flexible, innovative approaches to combat misinformation while preserving an open internet and freedom of speech. Participants agreed that supporting accurate, quality information is crucial in countering the root causes of misinformation.

Keypoints

Major discussion points:

– The rapid spread and increasing sophistication of misinformation, especially through social media and AI technologies

– The need for collaboration between governments, tech companies, civil society and other stakeholders to combat misinformation

– Balancing freedom of expression with protecting users from harmful misinformation

– The importance of digital literacy and critical thinking skills to build societal resilience

– Developing effective regulations and technologies to detect and mitigate misinformation

Overall purpose:

The goal of this panel discussion was to examine the current landscape of digital misinformation, explore strategies and technologies to combat it, and discuss how different stakeholders can work together to address this complex challenge.

Tone:

The overall tone was serious and concerned about the threats posed by misinformation, but also constructive in proposing solutions. Panelists spoke with urgency about the need to act, while also emphasizing the importance of careful, balanced approaches that preserve freedom of expression. The tone remained consistent throughout, with panelists building on each other’s points collaboratively.

Speakers

– Barbara Carfagna: Italian journalist, moderator of the panel

– Deemah Al-Yahya: Secretary General of the Digital Cooperation Organization

– Khaled Mansour: Member of Meta Oversight Board

– Esam Alwagait: Director of the National Information Center, Saudi Data and AI Authority (SDAIA)

– Natalia Gherman: Assistant Secretary General, Executive Director of the UN Counter-Terrorism Committee Executive Directorate

– Mohammed Ali Al-Qaed: Chief Executive, Information & eGovernment Authority, Kingdom of Bahrain

– Pearse O’Donohue: Director for the Future Networks Directorate, DG CONNECT, European Commission

Full session report

Expanded Summary: Combating Misinformation in the Digital Age

This panel discussion, moderated by Italian journalist Barbara Carfagna, brought together experts from government, international organisations, and the private sector to explore the challenges of misinformation and disinformation in the digital age. The participants examined the sources, impacts, and potential solutions to combat false information online, with a particular focus on the role of emerging technologies like artificial intelligence (AI).

Sources and Spread of Misinformation

The panellists agreed that social media platforms have become the primary conduits for the rapid spread of misinformation. Esam Alwagait, Director of the National Information Center at the Saudi Data and AI Authority, identified these platforms as the main source of misinformation spread. Natalia Gherman, Assistant Secretary General and Executive Director of the UN Counter-Terrorism Committee Executive Directorate, highlighted that unmoderated online spaces are major hubs for misinformation and terrorist content. Mohammed Ali Al-Qaed, Chief Executive of the Information & eGovernment Authority in Bahrain, noted that social media algorithms often promote sensational content, exacerbating the problem.

The discussion emphasised that while misinformation has always existed, its reach and speed have increased dramatically in the digital era. Gherman pointed out that influencers with large followings can rapidly spread misinformation, while Khaled Mansour, Member of Meta Oversight Board, observed that a lack of access to accurate information contributes to the spread of false narratives. Specific examples of misinformation during the COVID-19 pandemic and elections were mentioned to illustrate the real-world impact of this issue.

Mansour made a particularly thought-provoking comment, stating, “Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.” This remark underscored the real-world consequences of misinformation beyond online discourse, shifting the conversation to focus on its potential for violence and harm.

Technological Solutions and Challenges

The panel explored how technology, particularly AI, can be both a source of and a solution to misinformation. Alwagait discussed how AI and machine learning tools can detect manipulated content, including the use of natural language processing to analyze linguistic patterns. Al-Qaed mentioned existing fact-checking and content verification tools, though he noted these often require user effort.

Specific technologies discussed for combating misinformation included:

– AI-driven fact-checking tools that could flag alarming content automatically

– Machine learning algorithms to analyze linguistic patterns and identify potential misinformation (a minimal sketch follows this list)

– Tools to analyze video and audio content to detect manipulated media

– Crowdsourced flagging systems to leverage user input in identifying false information
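
To make the linguistic-pattern item above concrete, the sketch below shows one plausible shape such a detector could take. It is an illustration only, not any panellist's actual system: the tiny `texts`/`labels` dataset, the 0.5 flagging threshold, and the scikit-learn pipeline are all assumptions chosen for brevity.

```python
# Minimal sketch: flagging posts whose linguistic patterns resemble
# known misinformation. Real systems train on large labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = likely misinformation, 0 = credible)
texts = [
    "Miracle cure eliminates virus overnight, doctors stunned!",
    "Health ministry publishes weekly vaccination statistics.",
    "Secret memo proves the election was decided in advance!!!",
    "Electoral commission releases certified turnout figures.",
]
labels = [1, 0, 1, 0]

# TF-IDF over word unigrams/bigrams captures surface cues such as
# sensational phrasing; logistic regression turns them into a score.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

post = "Shocking secret cure the government is hiding!!!"
prob = model.predict_proba([post])[0][1]
print(f"misinformation probability: {prob:.2f}, flagged: {prob >= 0.5}")
```

In practice such a score would be only one signal, combined with the media-forensics and crowdsourced mechanisms listed above and routed to human reviewers rather than triggering removal on its own.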

Al-Qaed proposed the concept of “verify-by-design” tools that could tag information at its source, potentially providing users with immediate context about the reliability of content. This approach could offer a proactive solution to misinformation detection.
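
The session does not spell out how such tagging would be implemented. Purely as a hedged illustration, the sketch below shows one way a publisher could sign content metadata at the point of creation so that platforms can later check provenance, loosely in the spirit of provenance standards such as C2PA (raised later in the session). Everything here, including the `sign_at_source` helper and the shared demo key, is hypothetical; real schemes use public-key certificates rather than shared secrets.

```python
# Illustrative "verify-by-design" tagging: content metadata is signed at
# the source so downstream platforms can label provenance for users.
import hashlib
import hmac
import json
from datetime import datetime, timezone

PUBLISHER_KEY = b"demo-secret"  # hypothetical; real systems would use PKI

def sign_at_source(content: bytes, publisher: str) -> dict:
    """Create a provenance tag binding the content hash to its source."""
    tag = {
        "publisher": publisher,
        "sha256": hashlib.sha256(content).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(tag, sort_keys=True).encode()
    tag["signature"] = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return tag

def verify_tag(content: bytes, tag: dict) -> bool:
    """Recompute the hash and signature; a platform could surface the
    result as a label rather than removing unverified content."""
    claimed = {k: v for k, v in tag.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed["sha256"]:
        return False  # content altered after it was tagged
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

article = b"Electoral commission releases certified turnout figures."
tag = sign_at_source(article, publisher="example-news.org")
print(verify_tag(article, tag))         # True: provenance intact
print(verify_tag(article + b"!", tag))  # False: altered after signing
```

A design along these lines matches the panel's preference for informing users over removal: a missing or failed tag yields a visible label while the content itself stays in place.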

However, the discussion also acknowledged that AI technologies pose risks in generating more sophisticated fake content, highlighting the ongoing arms race between misinformation creation and detection.

Regulatory Approaches and Challenges

The panel agreed on the need for innovative regulations to combat misinformation while enabling innovation. Pearse O’Donohue, Director for the Future Networks Directorate at DG CONNECT, European Commission, pointed to the EU’s Digital Services Act as a potential model for other countries. He argued that regulations should put the onus on platforms to moderate content, suggesting a more active approach to content removal.

However, this view was not universally shared. Mansour advocated for a more nuanced approach, arguing that not all misinformation is harmful and that removal is not always the best solution. He suggested that labelling manipulated content can inform users without removing it entirely, preserving freedom of expression.

The discussion highlighted the challenges of regulating smaller platforms and encrypted messaging apps, which often lack the resources or infrastructure for effective content moderation. The panel also debated the merits of content removal versus labeling, considering the potential impacts on free speech and user autonomy.

O’Donohue raised an important question about the challenges of regulation, asking, “If a government or a regulatory authority decides to step in and decide on what is misinformation and what is not, well then who moderates the regulator?” This comment led to a more nuanced discussion about the balance between regulation and freedom of expression.

Multi-stakeholder Cooperation and Global Frameworks

There was broad consensus among the panellists on the need for collaboration between governments, tech companies, academia, and civil society to address misinformation effectively. Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, emphasised this multi-stakeholder approach.

Gherman highlighted the importance of international forums like the Internet Governance Forum (IGF) in enabling stakeholders to develop unified strategies. She also stressed the need for global cooperation and frameworks to address misinformation on a broader scale.

The panel discussed the merits of global versus regional approaches to regulation. Al-Qaed suggested that regional cooperation could give smaller countries more influence when dealing with tech companies, potentially leading to more effective solutions. There was also a call for more focused global and regional events to develop unified strategies against misinformation.

Balancing Free Speech and User Protection

A recurring theme throughout the discussion was the need to balance efforts to combat misinformation with protecting freedom of expression and avoiding censorship. Mansour emphasised that misinformation policies must balance protecting users and preserving free speech. He argued for solutions grounded in human rights principles, a point echoed by Gherman.

The panel discussed potential unintended consequences of countering misinformation, such as impacts on human rights and freedom of expression. They stressed the importance of considering these factors when developing policies and technologies to address false information.

O’Donohue suggested a nuanced approach, stating, “Not all misinformation can be classified as bad, and that we therefore need a gradual response. And of course, we preserve our most direct and intrusive measures for those content, which is clearly supporting terrorist, criminal, or other dangerous philosophies or content.” This comment encouraged a more refined approach to addressing misinformation, moving away from blanket solutions towards more targeted, context-specific strategies.

Promoting Accurate Information and Digital Literacy

The panel agreed that supporting accurate, credible information is crucial in countering misinformation. Mansour particularly emphasised this point, suggesting that promoting good information could be an effective strategy to counter misinformation at its root.

The importance of digital literacy and critical thinking skills in building societal resilience against misinformation was also discussed. While specific strategies for improving these skills were not elaborated upon, the panel recognized their crucial role in empowering users to navigate the complex information landscape.

Conclusion

The panel concluded by acknowledging the challenges of the rapidly evolving digital landscape and the need for flexible, innovative approaches to combat misinformation while preserving an open internet and freedom of speech. While there was general agreement on the severity of the problem and the need for multi-stakeholder collaboration, differences emerged in the specific strategies and priorities for addressing misinformation.

The discussion highlighted several unresolved issues, including how to effectively regulate smaller, unmoderated online spaces, address misinformation in encrypted messaging apps and private groups, and develop a standardised approach to defining and classifying harmful versus non-harmful misinformation.

Moving forward, the panellists suggested developing innovative regulations, creating public-private partnerships, implementing AI-driven fact-checking tools, establishing ‘verify-by-design’ mechanisms, and organising more focused global and regional events to develop unified strategies against misinformation. The overall tone of the discussion was serious yet constructive, emphasising the urgency of addressing misinformation while also calling for careful, balanced approaches that preserve fundamental rights and freedoms.

Session Transcript

Barbara Carfagna: Hello, everybody. I’m an Italian journalist, and I’m proud to be here to open this important session on keeping peace in our society. Chatbots and deepfakes are transforming cyber threats, from spreading disinformation to enhancing terrorist capabilities, as governments and tech companies struggle to keep up with propaganda, disinformation, recruitment and operational planning. In the 80s, cyber threats were about hacking into computers to extract information. Then it escalated: in the following years, you could get into a computer, change the software and cause damage to the physical system it controlled. But now cyberspace has become a tool not to get information or cause physical harm, but to influence public opinion. Terrorists and criminals are exploiting AI tools, while regulatory bodies, tech companies and law enforcement are unprepared to address these emerging threats. Generative AI is easy to use. You can create fake news, but that is an old world compared to deepfakes. We are used to thinking that a photo represents something that exists, but for the first time we can now produce photos of something that does not correspond to anything true. There are some filters in these systems, but jailbreaking is a technique that can trick AI filters, and it is very easy to use as well. Changing the prompt is one of the techniques, so we need stronger safeguards. How to fight against this? This is the purpose of our panel. Focusing on content and explaining that it is fake does not work anymore: an organized campaign with bots and influencers spreads a message to millions of people in ten seconds, so focusing on the content is no longer as useful as it was in the first phase. One solution is to focus on the behavior of the message in real time, through tools that analyze its spread across social networks. This is one of the challenges. With our panelists we will have a complete overview of the rules, the technological side, and the cultural and educational side. So I welcome our panelists, starting from Esam Alwagait, Director of the National Information Center, Saudi Data and AI Authority, welcome. Then, Mrs. Deemah Al-Yahya, Secretary General of the Digital Cooperation Organization, welcome. His Excellency Mr. Mohammed Ali Al-Qaed, Chief Executive, Information & eGovernment Authority, Kingdom of Bahrain, welcome. Assistant Secretary General Mrs. Natalia Gherman, Executive Director of the UN Counter-Terrorism Committee Executive Directorate, welcome. Mr. Pearse O’Donohue, Director for the Future Networks Directorate, DG CONNECT, European Commission, welcome. Mr. Khaled Mansour, Member of the Meta Oversight Board, welcome. Starting from Mrs. Deemah Al-Yahya: what are the most prevalent digital sources contributing to the spread of misinformation today, and how has the landscape evolved in recent years? Maybe we can use that mic? I can’t hear. Let’s talk. Thank you. Oh, yes.

Deemah Al-Yahya: Well, thank you very much, and it’s great to start the morning with a subject so important and so profound at this point in time, when we are not only celebrating new innovations and the progression of emerging technologies, but also looking at how we can safeguard the use of the internet. We’ve seen that the internet has opened amazing opportunities for the prosperity of people, from increasing productivity to improving quality of life. And we see that platforms, like social media platforms, have been tools to help in finding jobs as well as in education. But then we come to a very big issue, which is the harmful part of the internet and of social media platforms: misinformation and the harmful use of information. What is also very alarming is that for us to benefit more from AI, for instance, these algorithms are built on existing data online. If that data is false and fake news, that is a very big challenge for the resources AI draws on. And this is why, when you ask me that question, you cannot pinpoint one independent institution or entity or person that is responsible. It is a collective responsibility, from governments to the private sector and the innovators, to human capital and civil society. We therefore have to look at this challenge with a collective eye and a united force for collaboration. And this is why, in the DCO, what we have created is a facilitation that brings governments together with the innovators and civil society to think together, to co-create and co-design initiatives and the way forward to reduce this kind of issue. Thank you.

Barbara Carfagna: Mr. Khaled Mansour, I ask you the same question: which are the sources contributing to the spread of misinformation today?

Khaled Mansour: Thank you very much. Sabah al-kheir. As-salamu alaykum. Don’t worry, don’t run to your translators. I will speak in English. It’s a week from today that the regime of Bashar al-Assad fell in Syria. And with it came a flood of people coming out of prisons and a flood of images on social media: jubilation, happiness, mothers embracing their sons and daughters in many cases. But in parallel, there was also a flood of lies, rumors, and what we call misinformation. And many of our friends, colleagues, politicians, and journalists swallowed this in a very gullible manner. The main sources of the digital spread of misinformation are as old as humanity. Since people started to communicate, there have been lies, ignorance, deception, willful and non-willful. There has been self-interest, exaggeration, biases, et cetera. What has changed in the last 15 to 20 years is the exacerbation and the acceleration of this trend. Access: all of us are glued to our smartphones, from Bangladesh to Mexico. The flood: there is a flood of information all the time. All of us wake up in the morning and the first thing we do is scroll, and the last thing before we sleep, we scroll. Meta alone, on whose Oversight Board I serve, has 3 to 4 billion users. This means 3 to 4 billion pieces of content, immediately. Everything is immediate. A long time ago in this country, when you wrote a poem bad-mouthing somebody, a month later he would write a poem in return. Or in the New York Times, if you write a bad op-ed, a week later there will be an op-ed in response. Now it’s immediate. There is immediacy, and people don’t wait to check and make sure whether the information they are using is misinformation or not. And technology, as the speaker before me said, AI technology makes it even easier to make misinformation look far more believable. And finally, there are coordinated campaigns by governments and by corporations, for marketing and for insidious and nefarious purposes. So it is important to know what the source of misinformation is if we are to address it, and address it well. And I think all of us, in various ways, individuals, governments, corporations, are implicated in this. And one of the main victims that we don’t really speak about is not that we are deceived by misinformation; over time, the worse and deeper effect is the undermining of public trust in the information we receive from the media or from social media platforms. Everything appears to be, and becomes, fake news. Misinformation, as all of us know, was catapulted into our public debates in 2016 because of the elections in the US, and then, for all of us, because of COVID and the high level of misinformation we all experienced. There was harm, or perceived harm, to the elections. But there is another very important concern that we don’t pay attention to, which is the lack of accurate information and good media all around since then. While we are trying to fight misinformation, and I’m trying to conclude here, we have to do this while avoiding censorship in repressive environments and while avoiding exacerbating violence, because I started with the flood of happy images from Syria and the flood of misinformation. Misinformation kills. By spreading misinformation in conflict times from Myanmar to Sudan to Syria, this can be murderous.
It’s very important to recognize that one of the very reasons for the spread of misinformation is a lack of access to accurate information, what is currently called information integrity. Information integrity is in trouble, and accurate information from credible media sources has declined or faced tough times due to economic reasons. Others also face a mounting challenge to analyze and dissect the flood that we already spoke about, because there is a persistent need to cultivate critical ability. All my good friends, academics and doctors and politicians, who believe a lot of the misinformation coming out of Syria, they actually also need to cultivate their ability to read, understand and analyze media much better. Thank you.

Barbara Carfagna: Okay. Thank you. So, what we see, as I told you in the introduction, is that the attacks that were once on infrastructure are now on our minds. It is very important to understand this: it is a sort of battlefield, this new situation we are entering, where generative AI spreads these techniques so fast that we can hardly reason about them. So, Mr. Esam Alwagait, I ask you the same question about the most prevalent digital sources.

Esam Alwagait: Sure. So when we talk about misinformation, we have to understand two things: first, how fast it propagates, and second, the coverage or reach of this misinformation. Because let’s face it, misinformation has always been there. But before, it used to have a limited reach, and it used to propagate much, much slower. Nowadays, with the internet and social media, misinformation propagates much faster, and almost everywhere. Let me give you some stats from a survey by UNESCO: 68% of people say they get their news from social media, 38% get it from online messaging, and, a shocking fact, only 20% get it from online media. So it’s obvious that the main source for propagating misinformation is social media. And what are the forms of misinformation? I would classify it in two parts. The first is intentional misinformation, where you have people or groups actively trying to push misinformation. And nowadays, misinformation is not only text, as it used to be. With the help of AI, and there are a lot of tools, you can generate video and audio so realistic that the average person cannot tell it is fake. So with the combination of AI tools and social media, we have very dangerous ways of spreading misinformation, and information integrity is more important today than it has ever been. So if you ask me what the main channel for spreading misinformation is, I would say social media and online messaging apps.

Barbara Carfagna: Yes. We have seen that with generative AI we can also build vertical generative AI. And with this vertical generative AI, where before, through social media, we could profile groups, now we can profile one person exactly, with their tastes, their orientation and desires, and also their vulnerabilities. So it is as if we had a precision weapon, and the capability to build, very fast, through a bot for example, information and fake information tailored to that person, so it is more and more convincing. It is what the experts call super-persuasion: you do not persuade someone as we could before; the persuasion is crafted just for him. This is, of course, very important for terrorism and for the recruitment of terrorists, and that is why I ask the question to Natalia Gherman: which are the most prevalent digital sources?

Natalia Gherman: Thank you, and good morning, ladies and gentlemen. It is a great pleasure to be here, and just as Madam Moderator mentioned, I represent a United Nations Security Council counter-terrorism entity, and my office is more focused on the spread of terrorist and violent extremist messaging online and offline. However, there are great similarities in how misinformation, as well as disinformation and terrorist-related messaging, are created, posted, and propagated. In terms of the changing landscape, I should say that the COVID-19 pandemic led, of course, to a massive rise in people joining cyberspace, and the sheer number of people using the Internet and social media now is staggering. We have also seen an explosion in the number of gaming and social media platforms, messaging systems, and online spaces. So in terms of malicious content online, we in the United Nations highlight that unmoderated spaces are major hubs for misinformation and terrorist content. These are, first of all, social media platforms and messaging systems with deliberately lax content policies. Then there are, of course, small platforms lacking the capacity to effectively moderate content, and hidden chat rooms and sites. And I also want to draw your attention to the rise of influencers with millions of followers. Combined with algorithms pushing content, they have flooded social media and messaging services with misinformation and worse content. So we are in a time when just a handful of people can seed widespread misinformation. And there are two trends we in the Counter-Terrorism Committee Executive Directorate have noted while assessing member states’ capacities to prevent and counter terrorism, and they are, ironically, at opposite ends of the technology spectrum. On the one hand, in dialogue with member states, we have seen the ever-increasing use of new technologies like chatbots, generative AI and other AI-powered tools to generate and spread terrorist-related content and other malicious messaging. This has, by the way, led to the creation of credible avatars and also deepfake video and audio used for criminal purposes, for the spread of misinformation, and to incite violence. On the other hand, we have seen an uptick in so-called old-fashioned technologies, like the use of terrorist-operated websites and human support networks, to help spread messages to followers across diverse platforms. These methods very often rely on hiding content in hard-to-find channels and delivering it to selected audiences. But in both cases, detecting, tracking, and countering the spread of harmful content is posing ever-increasing challenges to governments, to the member states, and to all professionals. Thank you.

Barbara Carfagna: Thank you. Mr. Mohammed Ali Al-Qaed, same question, on the most prevalent digital sources.

Mohammed Ali Al-Qaed: It’s a great pleasure to be here at this fantastic event, and I would like to congratulate the Digital Government Authority for organizing it and for inviting me to be part of this distinguished panel. Of course, technological advancements, whether in infrastructure, connectivity or social media platforms, and the speed of spreading information, have changed the behavior of society. Many people, instead of going to the traditional media for information, use the new channels. I think it is due to the fact that people cannot wait; they want instant news. I recall once there was a news item which had aired, and somebody called me and said, you don’t know what’s going on. And I said, what happened? He told me, this and this happened. When I looked at it, it had aired maybe 30 minutes earlier, so I was late by a 30-minute gap. That, I think, together with the reaction of the traditional media and the governments to this change in behavior and in the way people deal with the news, created this new way of spreading misinformation. Because everybody became a producer; the new creators, whether social media activists or many others, became the new producers of information. The technologies contributed to that: for example, the algorithms used by social media make the information which is more clickable or more sensational come out on top, and I think that is maybe one reason for the misinformation. The other thing is that the more people try to distinguish between fake and true information, the more sophisticated the content that deepfakes and AI create. And of course, the encryption of social media platforms makes it more difficult, because you cannot control or know who is spreading what kind of information between people themselves. There is the societal and psychological side of it as well: for many people, whenever they receive a piece of information, it is for them the absolute truth, and then it is very difficult for you to try to negate it. It is easier to convince, but it is more difficult to change minds. And of course, when the messages come from your friends and from your network, again, they are more credible to you. And then fear and emotions usually spread more. There is the economic side too: anything that spreads more and is more clickable, people will try to spread to get more followers, and sometimes to monetize that content. So, in summary, I think that is what changed the behaviors of society.

Barbara Carfagna: This point is very important: once you convince someone, it is very difficult to bring them back. So you have to act against misinformation and disinformation networks beforehand, with different methods of course, and take on the problem from different points of view. That is why I ask the same question to Mr. Pearse O’Donohue: how do the rules work for this, and how can they change? The technology chain moves so fast; how can the rules follow its speed?

Pearse O’Donohue: Thank you. Yes. Is this working?

Barbara Carfagna: I can’t hear you now.

Pearse O’Donohue: No.

Barbara Carfagna: Maybe.

Pearse O’Donohue: Somebody turn me on.

Barbara Carfagna: We can. Yes, you can get the mic, sorry.

Pearse O’Donohue: Thank you. And thank you very much for the invitation here. It’s a privilege to speak. We’ve already heard several insightful answers, so I would perhaps complement them by taking a slightly different approach to identifying the most prevalent sources. The first thing to say is that it is simply the volume, the number of different sources that exist, that itself contributes as a whole to the existence and prevalence of misinformation. And that is an issue to do with everything that has been said about its prevalence, but also with its scope, its range and its speed. The second point is that we do at some point have to make a distinction between misinformation and disinformation. Disinformation is targeted, untrue statements, facts, even now videos with AI support, intended to mislead the public or individuals. In some cases that is so obvious that a lot of users are unaware of the more benign but nevertheless nefarious misinformation, which is affecting their choices and affecting their daily lives. So we do have to accept that the most prevalent sources are the social media platforms, but I would say particularly those platforms which are not sufficiently moderated and do not have sufficient safeguards in place to identify, or in some cases prevent, misinformation. And that is really the point when you ask me about rules: we have to see what can be done to guide the industry, what can be done to protect the user, while of course allowing the user to choose. And that is very important when we come to an open internet. If a government or a regulatory authority decides to step in and decide on what is misinformation and what is not, well then who moderates the regulator? And that becomes a permanent issue. So the rules have to be focused on allowing the individual to choose, but protecting them from disinformation, by ensuring that the providers, the platforms, et cetera, are actually capable of moderating their content, that this is done in an objective way, and that dangerous material is actually flagged to the individual. When it comes to terrorist activity, criminal activity, or activity which puts at risk the lives of individuals, such as for example disinformation about vaccines, then of course there is a role for government to step in, but it should put the onus on the platforms to actually achieve that, rather than directly intervening on the content platforms themselves. So this is the context in which we have these discussions in the Internet Governance Forum, in order to actually stop people going too far, and so that we actually have an understanding of how it might work.

Barbara Carfagna: And what should be the key priorities for governments when developing policies and regulation to combat misinformation?

Pearse O’Donohue: Thank you. Well, the first thing is that in a forum like this, governments must work with the other stakeholders, many of whom are the experts in the running of the Internet, whether it is the technical community, academia, civil society through their NGOs, and of course business; they can learn from and work with them. It will always be more effective if it is done in that multi-stakeholder way. But what governments could do, taking on their legitimate roles and responsibilities, is, as I’ve said, to ensure that the framework exists: that the providers of the social media platforms know clearly what they must do and have put in place sufficient mechanisms of independent content moderation, but also, in the ultimate cases, which hopefully should be relatively minimal, that they have the ability to rapidly take down dangerous material, criminal, terrorist, or other similar material, for the protection of individuals. There must also be sufficient transparency and redress in the mechanisms that governments or regions put in place, so that we can learn from any mistakes that we make. Obviously, we are not always going to get it right, so errors or mislabelling must be correctable, and redress must be effective for the individuals, companies or groups who may feel that they are being unnecessarily or unfairly censored; that is a very important part. So again, independent monitoring is critical: academic and other experts, independent of government, who can objectively and independently give a view on the functioning of these processes, but also, in some cases, actually be the experts on the content itself.

Barbara Carfagna: The European Union has introduced regulation on misinformation in the Digital Services Act. Will other countries follow and make similar rules, as happened with the GDPR, in your opinion?

Pearse O’Donohue: Thank you. We do think that what we have done can be of interest and of use to other countries and regions. You mentioned the GDPR, which is a very good example. Perhaps one of the reasons why we could help others to develop their systems is that, while Europe and Europeans are major consumers of the major social platforms, most of those platforms are not actually European-based or of European origin. So we have had, shall we say, a different objective than some countries. Secondly, because we have within Europe very different cultures and very different experiences, and even the linguistic element is very important in this area, that will help others. It is not a unilingual system that we have had to put in place, and we need to deal with multiculturalism and differences of culture, religion and ethnic tradition. So yes, but in all modesty, as I’ve said before, there is always the possibility that in one or another area what we do isn’t quite right and we have to address that. So we have to learn from our mistakes, and we can learn from others. Secondly, of course, for the very reason that we want the internet to be localized to address different linguistic and ethnic cultures, there will always be a need for some modification and adaptation, for example of rules that are appropriate for Europe, for other countries and regions. But we do feel, again in the IGF and other fora, that that is exactly where we can have these discussions, so that others can learn not from us but with us from the experience, for example, of the Digital Services Act as we implement that important legislation, and find tools that are appropriate for their case but which will achieve the same objective: an open, safe internet where there is free access to information but also a clear barrier to disinformation and misinformation. Thank you.

Barbara Carfagna: Thank you. Mr. Mohammed Ali Al-Qaed, what emerging technologies are proving most effective in identifying and mitigating misinformation, and how can their adoption be scaled responsibly?

Mohammed Ali Al-Qaed: There are many fact-checking tools, by Google and by many others, and even image and video verification tools. The problem with those tools is that they require the recipients to put in some effort, to go and look up each piece of information, which takes time and effort to verify what is going on. And those tools use different technologies, but mostly they are moving to AI and machine learning. Legislation, and the processes that make these tools effective, are I think the most important thing, because if misinformation spreads and we cannot identify its source, then you cannot trace who started it. That makes it very difficult, because the harm is done and you cannot find out who is behind that piece of information. And I think if we can introduce verify-by-design tools that tag the information, not hiding it or preventing it from going out, but tagging it, then at least the user can see that tag and look at what the tools’ perception, let’s say, of that piece of information is. I think that might minimize the harm of the data which is going out. And the government of India introduced legislation that mandates all the social media platforms to identify the source of information. That kind of legislation, I think, is required along with many other measures, and we have to work on them collectively with society and the stakeholders.

Barbara Carfagna: Thank you. Mr. Alwagait, I have been to SDAIA, so I have seen that you study each technology and try to examine which is the best one for the purpose you have. So what are the emerging technologies that are proving most effective in identifying and mitigating misinformation?

Esam Alwagait: Sure. So to fight misinformation, you have two issues: first, how to detect the fake media or information, and second, what you do when you detect it. For detecting misinformation, there are a lot of technologies, and we have a case of fighting fire with fire: if AI is used to generate misinformation, then you have AI tools to detect that content. For example, we have machine learning and NLP that can analyze linguistic patterns and detect manipulated text. There are AI tools to analyze video and audio, for example the pitch of the voice or facial movements, to detect fake content generated by AI. So that is detection. But the most important part is what you do next. And I think there should be collaboration between tech companies, governments, academia and international organizations to come up with innovative regulations to combat misinformation. And when I say innovative regulations, I mean the kind of regulations that do not hinder innovation. Because we all know that too much regulation will slow down innovation, while a lack of regulation will allow cases like misinformation. So innovative regulation is the sweet spot where you have this balance between having regulations and enabling innovation. For example, in Saudi Arabia, we worked with the UN’s global AI advisory body to create more regulations for ethical and responsible AI. We have also established the International Center for AI Research and Ethics here in Riyadh, enabling these kinds of regulations and ethical AI. So to combat misinformation, you have the tools to detect it, but you need the regulations that enforce these tools.

Barbara Carfagna: Do you have a system to monitor the behavior of the message, the fake message? Because I think that, given the speed, this may be the most effective way to stop it, since, as we heard before, once someone is convinced you cannot bring them back. So how do you try to stop the message?

Esam Alwagait: A lot of social media platforms do have AI-driven fact-checking tools, so that as soon as content appears, if there is something alarming, it is automatically flagged. Other platforms, for example, crowdsource this: they allow the online community to flag misinformation and share their views about it. So, as you mentioned, sometimes misinformation can be stopped even before it starts spreading, using these tools.

Barbara Carfagna: Okay, thank you. So, starting from your considerations, how can governments, tech companies, media and civil society work together to create a unified strategy for combating misinformation? I put this question to Mrs. Natalia Gherman.

Natalia Gherman: Thank you. I believe that one way governments, tech companies, media and civil society can work together is to use international mechanisms such as the Internet Governance Forum for that purpose. We have this week a fantastic opportunity here in Riyadh to put our heads together and to develop a unified strategy. And the key players must also take advantage of the many focused global and regional events held for that purpose. I can give a very good example: back in 2022, the United Nations Security Council Counter-Terrorism Committee organized a special meeting in New Delhi, India, that gathered governments of the United Nations member states, technology companies, civil society, research, academia and media, all researching and analyzing the very important issue of the misuse of new technologies for terrorist purposes and international capacity building, and of course all through the lens of respecting human rights. The outcome of that meeting was the Delhi Declaration, which led to the development of the so-called non-binding guiding principles for all United Nations member states on new payment technologies, unmanned aircraft systems, and of course information and communication technologies. My office, CTED, was tasked with drafting those non-binding principles. We had to work and collaborate with more than 100 partners from the governmental sector, civil society and academia, and of course we learned a great number of good practices, lessons learned, and effective operational measures. Some of the ideas and suggestions put forward by our partners included ways to counter myths and disinformation through digital and media literacy campaigns, which remain extremely relevant, teaching critical thinking skills, and building resilience to violent extremism and terrorist messages at all levels of society. There were also suggestions to develop guidelines for strategic communications and counter-messaging algorithms, as well as cross-platform reporting mechanisms. Similar efforts, both global and more focused fora and platforms, do help to build consensus and trust among relevant stakeholders. And of course, our aim should be the development of an operational plan to combat misinformation globally. The United Nations Security Council, as always, took the lead and highlighted the need to develop public-private partnerships through voluntary cooperation to address the exploitation of information and communication technologies in no fewer than six resolutions on counter-terrorism since 2017. And in the United Nations, we are increasingly consolidating our cooperation with such partners as Tech Against Terrorism, the Christchurch Call, and the industry-led Global Internet Forum to Counter Terrorism. There are many more good examples of public-private partnerships. So the key actors could draw on the playbook for countering terrorist narratives online, as laid out in the relevant Security Council resolutions and in the comprehensive international framework to counter terrorist narratives. And this framework, which the United Nations offers to all member states in the world, lays out measures states could take, including legal and law enforcement measures, cross-sector collaboration, and the development of strategic communications, among other things. Thank you.

Barbara Carfagna: Thank you very much. Mr. Khaled Mansour, we have also seen that there are some protocols that we are trying to build, like the C2PA or others. Do they work? Or in your opinion, how can we cooperate?

Khaled Mansour: Thank you, Barbara. Let me just take two steps back, since I am the only member of this panel who does not represent a government or a multilateral organization, and take a different track on how we can have strategies to counter misinformation. Firstly, I don’t think we have to have a unified strategy. We are different actors, all of us. Governments have their own responsibilities; every actor has its own set of interests and priorities. Global regimes are not necessarily the only solution. Global transparency, a forum like this, is a step in the right direction, where we speak to each other but also hold each other accountable, because at the end of the day we come from different frameworks. Secondly, people like us on the Oversight Board, and the Oversight Board is a self-regulatory body for Meta’s platforms, are independent, funded by an irrevocable trust from Meta. We can tell Meta to remove content or to restore content that they removed, and give them advice. Our guiding principle does not start from safety and protection; it starts from freedom from suppression. That is a different approach to misinformation. So we defend freedom of expression as long as it does not undermine the rights of others. For misinformation to be labeled as such, or to be removed, there must be very clear and legitimate laws in place, not ambiguous definitions used to achieve unjustifiable ends and suppress views in the absence of imminent and likely harm. And this is a key point, because a lot of misinformation is just harmless and useless, to be left alone or maybe only labeled. Tackling misinformation should be proportional to the likelihood and imminence of harm. That is a very important distinction that we always have to make. I am not talking about clear criminal activities, recruiting for terrorist organizations, child trafficking; all of that is clear and should be handled. Because we have to admit there is a balance that we need to strike between allowing people to express their views freely and protecting users from harm. Not all misinformation, I repeat again, is harmful. But when misinformation can incite violence, undermine public safety or directly harm individuals, we need to act. And I would claim this is not the majority of pieces of misinformation. There are various ways to handle misinformation, and removal is not necessarily the best one. For example, earlier this year, on the Oversight Board, we told Meta to leave in place content with a manipulated video of President Biden. We advised Meta to stop removing manipulated material, video, audio or text, manipulated by AI or otherwise, unless the content clearly violates policies, again, pornography, child trafficking, terrorist activities, et cetera, or violates human rights. Now, it is important to tell users that this content is manipulated. Our advice, our approach, is that Meta should then label that content as significantly altered. This is being transparent with users, and useful, without having to remove content. And Meta indeed started labeling all AI content that it can detect using tools, as Mr. Alwagait pointed out, so that users understand that this video of that president, or that video of a candidate in an election campaign, is actually manipulated. AI is not the challenge. AI exacerbates, speeds up, accelerates, but it is not the challenge.
States, corporations, and humans like us are the ones who sometimes abuse the system, using AI or other tools. And our strategies should be focused not on fighting these tools, but on actually using them, again, as Mr. Alwagait said, to expose, and sometimes even label or remove, this harmful content.

Barbara Carfagna: Thank you very much. We have just under nine minutes left, so I ask each of you for a quick remark, a final message to leave with our public.

Esam Alwagait: Sure. Misinformation is very dangerous; in cases like COVID-19, it cost people’s lives. We need to fight it. We need collaboration between governments, academia, tech companies and international organizations to come up with proper regulations to combat misinformation. And I would like to reiterate our commitment here in Saudi Arabia to work locally and globally to arrive at such regulations. Thank you.

Barbara Carfagna: Thank you.

Deemah Al-Yahya: I just wanted to build on Her Excellency Natalia’s comment about our differences and how cooperation can help. Before this session, thanks to the power of platforms like the IGF and with the support of Saudi Arabia hosting the IGF this year, we conducted a roundtable where our member states, and also states that are not members, came together. And what we found is that the challenges are the same, but the ways of tackling them differ. Just by sitting together, all shared best practices, which saved time and expedited solving the problem. And, by consensus of all our members and mandated by our chair, the Minister of Kuwait, we are going to hold another such meeting, inviting the private sector and the social media platforms to be part of that discussion, to create, as His Excellency mentioned, the right regulations and standards that all member states can adopt by consensus and that the social media platforms will therefore respect. My message is: let’s continue cooperating, and we have to act now, but in a way where we work hand in hand rather than independently and in silos. Thank you.

Barbara Carfagna: Thank you.

Mohammed Ali Al-Qaed: I just wanted to highlight that not everything about these technologies is negative. They have allowed the globe to come closer and to have the same information at the right time, creating a common ground between everybody. But usually governments and society react only to the misinformation that spreads most widely or has the highest impact. The problem is the misinformation that goes into smaller groups and smaller societies and does not spread so much; nobody takes care of it, yet it can build into deeper beliefs in those people, becoming much more difficult to counter and perhaps causing bigger problems in the future. I think we have to address that. The other thing is that countries alone, especially smaller countries, have less influence on tech companies. That is why I think we have to work together regionally to put in place the regulations and mechanisms to combat misinformation. Thank you.

Natalia Gherman: I would like to reiterate that the threat posed by misinformation, but also by terrorist and violent extremist narratives, is rapidly evolving, and so should be the response by states and all other stakeholders. The states, of course, have to be technologically agile to understand the nature of the threat and to counter it, but this approach should involve all of society: government, civil society, non-governmental partners, academia, research, and the private sector. Only in this case will we succeed. I also want to draw our attention to the sometimes unintended consequences of efforts to counter both terrorist narratives and misinformation when it comes to human rights, freedom of expression, freedom of opinion, journalism and privacy. Human rights cannot be compromised. Solutions for the spread of misinformation and illicit content online must be grounded in a shared commitment to human rights principles. Thank you.

Barbara Carfagna: Thank you.

Pearse O’Donohue: Thank you. Well, if I may be so bold as to talk in terms of principles, which we have heard here today on the panel. And the first is the protection of the individual, which we must strive to achieve. But the second is the preservation of freedom of speech. And there are times when those two first principles can be perceived to be in conflict. But it’s particularly in the mechanism or the weight of the procedures that we put in place to protect the individual, which can, if misused, actually hinder and block freedom of speech. And blocking freedom of speech, freedom of expression, is itself harmful to the individual as it is harmful to society. So that’s why we have to get it right. So the issue of accountability, a principle, the principle of transparency are critical to achieving the right balance in tackling misinformation. I do agree that not all misinformation can be classified as bad, and that we therefore need a gradual response. And of course, we preserve our most direct and intrusive measures for those content, which is clearly supporting terrorist, criminal, or other dangerous philosophies or content. With that in mind, cooperation is therefore the way of doing things. I do agree that individual countries cannot achieve the same as regions can. And that is one of the reasons why the European Union has acted. And we are very keen to discuss and share in the recognition of diversity with other regions who may seek to achieve the same objectives. And that is bringing me to my last principle, which is that it’s not just freedom of expression. It is the open internet, which is available to all, which is of so great importance to economies and societies throughout the world, which we are here to seek to preserve and develop. Thank you.

Barbara Carfagna: Thank you.

Khaled Mansour: Thank you very much, Barbara. We spoke a lot about misinformation: how to define it, how to counter it, how to deal with it. Let me conclude by talking about good information, because supporting good information, accurate media, and a credible exchange of information is paramount if we are to counter the root causes of misinformation. This should be a major objective for governments, for the tech industry, which is not represented on this panel, and for civil society and content moderators. We have to admit that there is a fine balance, as Pearse just pointed out, between respecting and supporting freedom of expression, human rights, and accurate information on the one hand, and addressing harmful, and I underline harmful, misinformation likely to cause imminent harm on the other. In striking that fine balance between these two overriding objectives lies the challenge that we all face, even if we take different roads to reach our objective. Thank you.

Barbara Carfagna: So we finished exactly at zero: you are perfect speakers. I will give my conclusion, which is that, as we know, we are facing a real revolution that is not industrial but human: for the first time, as humans, we are acting in the world together with artificial agents. These agents are also organizing our lives and acting with us. In these days we are seeing for the first time how they act and organize: my generative AI can talk with her generative AI, and they organize a meeting together for us. This is a huge revolution that we cannot face with the tools we had before. That is why we are building a new ecosystem. And it is governance that can lead an ecosystem, not single vertical domains. That is why I thank the Internet Governance Forum once more for this panel, and for opening this event with it, because this is probably the most important topic we have to address in building our next society. Thank you.


Esam Alwagait

Speech speed: 128 words per minute
Speech length: 730 words
Speech time: 341 seconds

Social media platforms are the main source of misinformation spread

Explanation

Esam Alwagait identifies social media platforms as the primary source for spreading misinformation. He emphasizes that these platforms allow for rapid and widespread dissemination of false information.

Evidence

Cites a UNESCO report stating that 68% of people get their news from social media, while only 20% get it from online media.

Major Discussion Point

Sources and Spread of Misinformation

Agreed with

Natalia Gherman

Mohammed Ali Al-Qaed

Agreed on

Social media platforms are major sources of misinformation

AI and machine learning tools can detect manipulated content

Explanation

Alwagait discusses the use of AI and machine learning technologies to combat misinformation. He explains that these tools can analyze linguistic patterns, video, and audio to detect fake or manipulated content.

Evidence

Mentions specific examples such as analyzing facial movements and sound pitch in videos to detect AI-generated content.
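As a purely illustrative sketch of this kind of detection pipeline, the toy Python example below fuses a few assumed audiovisual features (blink rate, lip-sync error, pitch variance; all hypothetical names, with synthetic data standing in for real feature extractors) into a single classifier whose score could drive labelling or escalation to human review. It is a minimal, assumption-laden sketch, not a description of any deployed system.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy training data: each row is [blink_rate, lip_sync_error, pitch_variance],
# assumed to be extracted upstream by face- and audio-analysis modules.
X_real = rng.normal(loc=[0.30, 0.10, 1.00], scale=0.05, size=(200, 3))
X_fake = rng.normal(loc=[0.10, 0.40, 0.40], scale=0.05, size=(200, 3))
X = np.vstack([X_real, X_fake])
y = np.array([0] * 200 + [1] * 200)  # 0 = authentic, 1 = manipulated
clf = LogisticRegression().fit(X, y)
# Score a new clip; the probability can drive "label, downrank, or review"
# decisions rather than outright removal.
suspect = np.array([[0.12, 0.38, 0.45]])
print(clf.predict_proba(suspect)[0, 1])  # estimated probability of manipulation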

Major Discussion Point

Technologies to Combat Misinformation

Agreed with

Mohammed Ali Al-Qaed

Agreed on

AI and technology can be used to combat misinformation

Differed with

Khaled Mansour

Differed on

Focus of misinformation strategies

Innovative regulations are needed to combat misinformation while enabling innovation

Explanation

Alwagait argues for the development of innovative regulations to address misinformation. He stresses the importance of finding a balance between implementing effective regulations and not hindering technological innovation.

Evidence

Cites Saudi Arabia’s work with the UN’s global AI advisory body to create regulations for ethical and responsible AI, and the establishment of the International Center for AI Research and Ethics in Riyadh.

Major Discussion Point

Regulatory Approaches to Misinformation


Natalia Gherman

Speech speed: 107 words per minute
Speech length: 1095 words
Speech time: 613 seconds

Unmoderated online spaces are major hubs for misinformation

Explanation

Gherman identifies unmoderated online spaces as significant sources of misinformation. She highlights that platforms with lax content policies or those lacking capacity to moderate effectively are particularly problematic.

Evidence

Mentions social media platforms, messaging systems with deliberately lax content policies, and small platforms lacking capacity to effectively moderate content.

Major Discussion Point

Sources and Spread of Misinformation

Agreed with

Esam Alwagait

Mohammed Ali Al-Qaed

Agreed on

Social media platforms are major sources of misinformation

Influencers with large followings can rapidly spread misinformation

Explanation

Gherman points out that influencers with millions of followers can quickly disseminate misinformation. She notes that when combined with algorithms pushing content, this can flood social media and messaging services with false information.

Major Discussion Point

Sources and Spread of Misinformation

Global cooperation and frameworks are needed to address misinformation

Explanation

Gherman emphasizes the importance of international cooperation in combating misinformation. She suggests that global forums and frameworks can help stakeholders develop unified strategies to address the issue.

Evidence

Cites the example of the UN Security Council Counter-Terrorist Committee’s special meeting in India, which led to the development of non-binding guiding principles for UN member states on new technologies.

Major Discussion Point

Regulatory Approaches to Misinformation

Agreed with

Deemah Al-Yahya

Pearse O’Donohue

Agreed on

Multi-stakeholder collaboration is crucial in combating misinformation

Solutions must be grounded in human rights principles

Explanation

Gherman stresses that efforts to counter misinformation must not compromise human rights. She argues that solutions for addressing illicit content online must be rooted in a shared commitment to human rights principles.

Major Discussion Point

Balancing Free Speech and Misinformation Control


Mohammed Ali Al-Qaed

Speech speed: 151 words per minute
Speech length: 958 words
Speech time: 379 seconds

Misinformation spreads faster and has wider reach than before

Explanation

Al-Qaed highlights that technological advancements have changed how information spreads. He notes that misinformation now propagates much faster and reaches a wider audience compared to traditional media.

Evidence

Provides an anecdote about receiving news 30 minutes after it was aired, illustrating the rapid spread of information.

Major Discussion Point

Sources and Spread of Misinformation

Social media algorithms promote sensational content

Explanation

Al-Qaed points out that social media algorithms tend to prioritize sensational or clickable content. This can lead to the increased visibility and spread of misinformation.

Major Discussion Point

Sources and Spread of Misinformation

Agreed with

Esam Alwagait

Natalia Gherman

Agreed on

Social media platforms are major sources of misinformation

Fact-checking and content verification tools exist but require user effort

Explanation

Al-Qaed acknowledges the existence of fact-checking and content verification tools. However, he notes that these tools require users to actively seek out and use them, which can be time-consuming and effortful.

Evidence

Mentions Google’s fact-check tools and image and video verification tools as examples.
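For readers curious how such a lookup works in practice, the snippet below queries Google’s Fact Check Tools API (the claims:search endpoint), which returns published ClaimReview ratings for a given claim. The API key is a placeholder, and the response field names should be checked against the current API documentation; this is a hedged sketch rather than production code.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; obtain a real key from Google Cloud
resp = requests.get(
    "https://factchecktools.googleapis.com/v1alpha1/claims:search",
    params={"query": "5G causes COVID-19", "languageCode": "en", "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()
# Print each matched claim with its reviewer and verdict.
for claim in resp.json().get("claims", []):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        print(claim.get("text"), "->", publisher, ":", review.get("textualRating"))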

Major Discussion Point

Technologies to Combat Misinformation

Agreed with

Esam Alwagait

Agreed on

AI and technology can be used to combat misinformation

“Verify-by-design” tools could tag information at the source

Explanation

Al-Qaed suggests the implementation of “verify-by-design” tools that would tag information at its source. This approach would allow users to see the tag and understand the perceived reliability of the information without completely blocking its dissemination.
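One way to picture such a verify-by-design flow is sketched below: the source signs content when it is published, and any downstream platform verifies the signature and attaches a provenance tag instead of blocking the item. This is a simplified illustration in the spirit of provenance standards such as C2PA, written with the Python cryptography library; the key handling and payload shown are assumptions, not any specific standard.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()   # held privately by the source
public_key = publisher_key.public_key()        # distributed openly

article = json.dumps({"source": "Example News", "body": "..."}).encode()
tag = publisher_key.sign(article)              # the tag attached at the source

# Downstream platform: verify the tag and label rather than remove.
try:
    public_key.verify(tag, article)
    label = "Provenance verified: Example News"
except InvalidSignature:
    label = "Unverified origin: treat with caution"
print(label)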

Major Discussion Point

Technologies to Combat Misinformation

Regional cooperation can give smaller countries more influence with tech companies

Explanation

Al-Qaed argues that individual countries, especially smaller ones, have limited influence over tech companies. He suggests that regional cooperation could give these countries more leverage in addressing misinformation issues with tech giants.

Major Discussion Point

Multi-stakeholder Cooperation


Khaled Mansour

Speech speed: 135 words per minute
Speech length: 1506 words
Speech time: 666 seconds

Lack of access to accurate information contributes to misinformation spread

Explanation

Mansour argues that the absence of reliable, accurate information sources contributes to the spread of misinformation. He emphasizes the importance of information integrity and the challenges faced by credible media sources.

Evidence

Mentions the decline of accurate information from credible media sources due to economic reasons and the challenges in analyzing the flood of information.

Major Discussion Point

Sources and Spread of Misinformation

Misinformation policies must balance protecting users and preserving free speech

Explanation

Mansour stresses the need for a balance between protecting users from harmful misinformation and preserving freedom of expression. He argues that not all misinformation is harmful and that responses should be proportional to the likelihood and imminence of harm.

Major Discussion Point

Balancing Free Speech and Misinformation Control

Not all misinformation is harmful and removal is not always the best approach

Explanation

Mansour contends that not all misinformation is harmful and that content removal is not always the most effective solution. He suggests that other approaches, such as labeling or providing context, can be more appropriate in many cases.

Evidence

Cites an Oversight Board decision advising Meta to leave a manipulated video of President Biden in place but label it as significantly altered.

Major Discussion Point

Balancing Free Speech and Misinformation Control

Differed with

Pearse O’Donohue

Differed on

Approach to content removal

Labeling manipulated content can inform users without removing it

Explanation

Mansour advocates for labeling manipulated content rather than removing it outright. This approach, he argues, allows for transparency and user education while preserving freedom of expression.

Evidence

Mentions Meta’s implementation of labeling AI-generated content following the Oversight Board’s advice.

Major Discussion Point

Balancing Free Speech and Misinformation Control

Supporting accurate, credible information is key to countering misinformation

Explanation

Mansour emphasizes the importance of supporting and promoting accurate, credible information as a crucial strategy in combating misinformation. He argues that this should be a major objective for governments, tech industry, and civil society.

Major Discussion Point

Multi-stakeholder Cooperation

Differed with

Esam Alwagait

Differed on

Focus of misinformation strategies


Deemah Al-Yahya

Speech speed: 117 words per minute
Speech length: 507 words
Speech time: 258 seconds

Governments, tech companies, academia and civil society must collaborate

Explanation

Al-Yahya emphasizes the need for collaboration between various stakeholders to address misinformation effectively. She argues that this collective approach is necessary due to the complexity and scale of the problem.

Evidence

Mentions the Digital Cooperation Organization’s efforts to facilitate collaboration between governments, innovators, and civil society to co-create initiatives addressing misinformation.

Major Discussion Point

Multi-stakeholder Cooperation

Agreed with

Natalia Gherman

Pearse O’Donohue

Agreed on

Multi-stakeholder collaboration is crucial in combating misinformation


Pearse O’Donohue

Speech speed: 136 words per minute
Speech length: 1382 words
Speech time: 607 seconds

The EU’s Digital Services Act could be a model for other countries

Explanation

O’Donohue suggests that the EU’s Digital Services Act could serve as a model for other countries in regulating misinformation. He notes that the EU’s experience in dealing with diverse cultures and languages could be valuable for other regions.

Evidence

Cites the example of GDPR as a previous EU regulation that influenced global practices.

Major Discussion Point

Regulatory Approaches to Misinformation

Regulations should put onus on platforms to moderate content

Explanation

O’Donohue argues that regulations should place the responsibility for content moderation on the platforms themselves. He suggests that governments should set the framework but allow platforms to implement the necessary mechanisms.

Major Discussion Point

Regulatory Approaches to Misinformation

Agreed with

Deemah Al-Yahya

Natalia Gherman

Agreed on

Multi-stakeholder collaboration is crucial in combating misinformation

Differed with

Khaled Mansour

Differed on

Approach to content removal

Agreements

Agreement Points

Social media platforms are major sources of misinformation

Esam Alwagait

Natalia Gherman

Mohammed Ali Al-Qaed

Social media platforms are the main source of misinformation spread

Unmoderated online spaces are major hubs for misinformation

Social media algorithms promote sensational content

Multiple speakers identified social media platforms as primary sources for the spread of misinformation, citing their wide reach, rapid dissemination capabilities, and algorithmic promotion of sensational content.

Multi-stakeholder collaboration is crucial in combating misinformation

Deemah Al-Yahya

Natalia Gherman

Pearse O’Donohue

Governments, tech companies, academia and civil society must collaborate

Global cooperation and frameworks are needed to address misinformation

Regulations should put onus on platforms to moderate content

Speakers emphasized the need for collaboration between various stakeholders, including governments, tech companies, academia, and civil society, to effectively address the complex issue of misinformation.

AI and technology can be used to combat misinformation

Esam Alwagait

Mohammed Ali Al-Qaed

AI and machine learning tools can detect manipulated content

Fact-checking and content verification tools exist but require user effort

Speakers discussed the potential of AI, machine learning, and other technological tools in detecting and combating misinformation, while acknowledging the current limitations and need for user engagement.

Similar Viewpoints

Both speakers emphasized the need to balance protecting users from harmful misinformation with preserving freedom of expression, suggesting that platforms should be responsible for content moderation within a regulatory framework.

Khaled Mansour

Pearse O’Donohue

Misinformation policies must balance protecting users and preserving free speech

Regulations should put onus on platforms to moderate content

Unexpected Consensus

Labeling content as an alternative to removal

Khaled Mansour

Mohammed Ali Al-Qaed

Labeling manipulated content can inform users without removing it

“Verify-by-design” tools could tag information at the source

Despite representing different sectors (civil society and government), both speakers proposed similar approaches to addressing misinformation through labeling or tagging content rather than outright removal, suggesting an unexpected alignment on preserving information flow while enhancing user awareness.

Overall Assessment

Summary

The speakers generally agreed on the significant role of social media in spreading misinformation, the need for multi-stakeholder collaboration, and the potential of technology in combating the issue. There was also a shared emphasis on balancing user protection with freedom of expression.

Consensus level

Moderate to high consensus was observed among the speakers on the main challenges and broad approaches to addressing misinformation. This level of agreement suggests a promising foundation for developing coordinated strategies to combat misinformation, though specific implementation details may require further discussion and negotiation among stakeholders.

Differences

Different Viewpoints

Approach to content removal

Khaled Mansour

Pearse O’Donohue

Not all misinformation is harmful and removal is not always the best approach

Regulations should put onus on platforms to moderate content

Mansour argues for a more nuanced approach to content moderation, suggesting that not all misinformation is harmful and removal isn’t always necessary. O’Donohue, on the other hand, advocates for placing the responsibility of content moderation on platforms, implying a more active approach to content removal.

Focus of misinformation strategies

Esam Alwagait

Khaled Mansour

AI and machine learning tools can detect manipulated content

Supporting accurate, credible information is key to countering misinformation

Alwagait emphasizes the use of technological solutions to detect misinformation, while Mansour argues for a focus on promoting accurate and credible information as a key strategy.

Unexpected Differences

Role of global cooperation

Natalia Gherman

Khaled Mansour

Global cooperation and frameworks are needed to address misinformation

Not all misinformation is harmful and removal is not always the best approach

While Gherman emphasizes the importance of global cooperation and frameworks to address misinformation, Mansour’s focus on nuanced approaches and preserving free speech suggests a potential tension between global standardization and localized, context-specific responses to misinformation. This difference is unexpected given the general consensus on the need for collaboration among the speakers.

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches to addressing misinformation, including content removal policies, the role of technology versus promoting accurate information, and the balance between global cooperation and localized responses.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the importance of addressing misinformation and the need for multi-stakeholder cooperation, speakers differ on the specific strategies and priorities. These differences reflect the complexity of the issue and suggest that a one-size-fits-all approach to combating misinformation may be challenging to implement. The implications of these disagreements highlight the need for flexible, context-specific solutions that can balance various concerns such as free speech, technological innovation, and effective regulation.

Partial Agreements

All three speakers agree on the need for innovative approaches to address misinformation that don’t hinder innovation or free speech. However, they propose different methods: Alwagait suggests broad innovative regulations, Al-Qaed proposes ‘verify-by-design’ tools, and Mansour advocates for labeling content rather than removing it.

Esam Alwagait

Mohammed Ali Al-Qaed

Khaled Mansour

Innovative regulations are needed to combat misinformation while enabling innovation

“Verify-by-design” tools could tag information at the source

Labeling manipulated content can inform users without removing it

Takeaways

Key Takeaways

Social media platforms are the primary source for spreading misinformation due to their wide reach and rapid dissemination capabilities

AI and machine learning technologies can be effective in detecting and combating misinformation, but also pose risks in generating more sophisticated fake content

A multi-stakeholder approach involving governments, tech companies, civil society and academia is needed to effectively address misinformation

Regulations and policies must balance protecting users from harmful misinformation while preserving freedom of speech and an open internet

Supporting access to accurate, credible information is crucial in countering misinformation

Resolutions and Action Items

Develop innovative regulations that combat misinformation while enabling innovation

Create public-private partnerships and international cooperation frameworks to address misinformation globally

Implement AI-driven fact-checking and content verification tools on social media platforms

Establish ‘verify-by-design’ mechanisms to tag information at the source

Organize more focused global and regional events to develop unified strategies against misinformation

Unresolved Issues

How to effectively regulate smaller, unmoderated online spaces that serve as hubs for misinformation

Addressing misinformation in encrypted messaging apps and private groups

Developing a standardized approach to defining and classifying harmful vs. non-harmful misinformation

Balancing regional/cultural differences in misinformation policies while maintaining a global approach

Mitigating unintended consequences of misinformation countermeasures on human rights and privacy

Suggested Compromises

Labeling manipulated or AI-generated content rather than removing it entirely to balance free speech concerns

Implementing graduated responses to misinformation based on likelihood and imminence of harm

Focusing on behavior and spread patterns of messages rather than just content to identify misinformation

Combining technological solutions with media literacy and critical thinking education initiatives

Thought Provoking Comments

Misinformation kills. Spreading misinformation in times of conflict, from Myanmar to Sudan to Syria, can be murderous.

speaker

Khaled Mansour

reason

This comment starkly highlights the real-world consequences of misinformation beyond just online discourse, emphasizing its potential for violence and harm.

impact

It shifted the conversation to focus more on the serious real-world impacts of misinformation, rather than just discussing it as an abstract online phenomenon.

We have seen an explosion in the number of gaming and social media platforms, messaging systems, and online spaces. So in terms of malicious content online, we in the United Nations highlight that unmoderated spaces are major hubs for misinformation and terrorist content.

speaker

Natalia Gherman

reason

This comment broadens the scope of the discussion to include newer, less regulated online spaces as sources of misinformation.

impact

It prompted further discussion on the challenges of regulating diverse online platforms and the need for comprehensive approaches to combat misinformation across various digital spaces.

If a government or a regulatory authority decides to step in and decide on what is misinformation and what is not, well then who moderates the regulator? And that becomes a permanent issue.

speaker

Pearse O’Donohue

reason

This comment raises a crucial point about the challenges of regulating misinformation without infringing on free speech or creating new problems of authority and censorship.

impact

It led to a more nuanced discussion about the balance between regulation and freedom of expression, and the need for transparent, accountable processes in combating misinformation.

To combat misinformation, you have the tools to detect it, but you need to have the regulations that enforce these tools.

speaker

Esam Alwagait

reason

This comment succinctly captures the dual nature of the challenge – technological solutions and regulatory frameworks – needed to address misinformation effectively.

impact

It helped bridge the discussion between technological solutions and policy approaches, encouraging a more holistic view of combating misinformation.

Not all misinformation can be classified as bad, and we therefore need a graduated response. And of course, we preserve our most direct and intrusive measures for content that clearly supports terrorist, criminal, or other dangerous ideologies.

speaker

Pearse O’Donohue

reason

This comment introduces important nuance into the discussion, acknowledging that not all misinformation is equally harmful and suggesting a proportional response.

impact

It encouraged a more refined approach to addressing misinformation, moving away from blanket solutions and towards more targeted, context-specific strategies.

Overall Assessment

These key comments shaped the discussion by broadening its scope from a narrow focus on technological solutions to a more comprehensive examination of the misinformation challenge. They highlighted the real-world impacts of misinformation, the complexities of regulating diverse online spaces, the tension between regulation and free speech, and the need for nuanced, proportional responses. This led to a richer, more multifaceted conversation that acknowledged the interplay between technology, policy, and societal impacts in addressing misinformation.

Follow-up Questions

How can we develop innovative regulations that combat misinformation without hindering innovation?

speaker

Esam Alwagait

explanation

Finding this balance is crucial for effectively addressing misinformation while still allowing for technological progress

How can we implement ‘verify-by-design’ tools that tag information without restricting its dissemination?

speaker

Mohammed Ali Al-Qaed

explanation

This approach could help users identify potential misinformation without resorting to censorship

How can we develop cross-platform reporting mechanisms for misinformation?

speaker

Natalia Gherman

explanation

This could improve coordination and effectiveness in addressing misinformation across different online platforms

How can we better support and promote accurate, credible information sources?

speaker

Khaled Mansour

explanation

Focusing on promoting good information could be an effective strategy to counter misinformation at its root

How can we ensure that efforts to combat misinformation do not inadvertently compromise human rights, freedom of expression, and privacy?

speaker

Natalia Gherman

explanation

Balancing security measures with fundamental rights is crucial in developing effective and ethical approaches to misinformation

How can regional cooperation be leveraged to influence tech companies in addressing misinformation?

speaker

Mohammed Ali Al-Qaed

explanation

Smaller countries may have limited influence individually, but regional cooperation could provide more leverage in negotiations with tech giants

How can we develop public-private partnerships to address the exploitation of information communication technologies?

speaker

Natalia Gherman

explanation

Collaboration between governments and private sector could lead to more comprehensive and effective solutions

How can we improve digital and media literacy campaigns to counter myths and disinformation?

speaker

Natalia Gherman

explanation

Enhancing public awareness and critical thinking skills could help build societal resilience against misinformation

How can we develop guidelines for strategic communications and counter-messaging algorithms?

speaker

Natalia Gherman

explanation

These tools could help in proactively addressing misinformation at scale

How can we better understand and address misinformation in smaller, less visible groups where it may be building deeper beliefs?

speaker

Mohammed Ali Al-Qaed

explanation

Focusing only on widely spread misinformation may overlook potentially dangerous localized misinformation

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.