Safeguarding the free flow of information amidst conflict | IGF 2023 WS #386

10 Oct 2023 05:00h - 06:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Rizk Joelle

Digital threats and misinformation have a significant negative impact on civilians residing in conflict zones. The dissemination of harmful information can exacerbate pre-existing social tensions and grievances, leading to an increase in violence and violations of humanitarian law. Furthermore, the spread of misinformation can cause distress and a psychological burden among individuals living in conflict-affected areas. This hampers their ability to access potentially life-saving information during emergencies. The distortion of facts and the influence of beliefs and behaviours as a consequence of the dissemination of harmful information also contribute to raising tensions in conflict zones.

One concerning aspect is the blurred line between civilian and military targets in the context of digital conflicts. Civilians and civilian infrastructure are increasingly becoming targets of digital attacks. With the growing emphasis on shared digital infrastructure, there is an increased risk of civilian infrastructure being targeted. This blurring of lines undermines the principle of distinction between civilians and military objectives, which is a critical pillar of international humanitarian law.

Moreover, digital threats erode public trust in humanitarian organizations. Cyber operations, data breaches, and information campaigns not only damage public trust but also hinder the ability of humanitarian aid organizations to provide life-saving services. This erosion of trust compromises their efforts to assist and support individuals in need.

To address these challenges, it is crucial for affected communities to build resilience against harmful information and increase awareness of the potential risks and consequences in the cyber domain. Building resilience requires the involvement of multiple stakeholders, including civil society and companies. Information and communication technology (ICT) companies, in particular, should be mindful of the legal consequences surrounding their role and actions in the cyber domain. It is important that self-imposed restrictions or sanctions do not impede the flow of essential services to the civilian population.

In addition to community resilience and awareness-building efforts, policy enforcement within business models is crucial. Upstream thinking in the business model can help reinforce policies aimed at countering digital threats and misinformation. However, the discussion around policy enforcement in business models is challenging. It requires expertise and a feedback loop with tech companies to find effective and efficient solutions.

In conclusion, digital threats and misinformation have dire consequences for civilians in conflict zones. The dissemination of harmful information exacerbates social tensions and violence, while digital attacks on civilians and civilian infrastructure blur the line between military and civilian targets. These threats also undermine public trust in humanitarian organizations and hinder the provision of life-saving services. To tackle these challenges, it is essential to build community resilience, increase awareness, and enforce policies within business models. Collaboration between stakeholders and tech companies is key to addressing these complex issues and safeguarding the well-being of individuals in conflict zones.

Speaker

In conflict zones, technology companies face a myriad of risks and must carefully balance the interests of multiple stakeholders. These companies play a critical role in providing essential information and functions but can also unintentionally facilitate violence and spread false information. One major challenge is responding to government demands, such as granting access to user information, conducting surveillance, or shutting down networks. These demands can come from both sides of the conflict and may lack clarity or have excessively broad scope.

The approaches companies normally use to deal with government demands in peacetime are more limited in conflict situations because of the associated risks. Companies can request clarity on the legality of a demand, respond minimally or partially, challenge the demand, or disclose it publicly; in conflict settings, however, each of these actions may carry significant risks.

To navigate these challenges, technology companies can implement various measures. These include establishing risk management frameworks, clear escalation procedures, and consistent decision reviews. By doing so, companies can better manage risks of operating in conflict zones. Collaboration with other organizations in coordinating responses in conflict regions and consulting with experts to understand potential implications of decisions can also help.

Respecting international humanitarian law is a key principle of corporate responsibility in conflict situations. Companies are expected to respect human rights and require guidance on respecting international humanitarian laws when conducting business in conflict-affected areas. Enhanced due diligence, considering heightened risks and negative human rights impacts, is recommended by the United Nations Guiding Principles on Business and Human Rights.

Further guidance is needed on what international humanitarian law means in practice for technology companies. To address design issues in platforms, companies should consider building the capacity to apply a conflict lens during product development, which would help them better identify and resolve issues in conflict zones.

Addressing information topics requires considering both upstream and downstream solutions. This comprehensive approach takes into account the flow of information from sources (upstream) to distribution and consumption (downstream).

Overall, technology companies operating in conflict zones face unique challenges and must navigate complex risks. Implementing effective risk management frameworks, respecting international humanitarian law, and incorporating a conflict lens into product development can better address the multifaceted issues they encounter. Further guidance is needed in certain areas to ensure operations in conflict zones align with established principles and standards.

Chantal Joris

The analysis delves into the challenges surrounding the free flow of information during conflicts. It starts by highlighting the digital threats that journalists and human rights defenders face in such situations. These threats include mass surveillance, content blocking, internet shutdowns, and other forms of coercion aimed at hindering the dissemination of information. The sentiment towards these challenges is negative, as they pose a significant threat to the values of freedom of expression and access to information.

Another significant aspect explored in the analysis is the role of tech companies in conflicts. Digital companies have become increasingly important actors in these situations, and the analysis argues that they have a responsibility to develop strategies to avoid involvement in human rights violations. This neutral stance reflects the need to address the complex ethical dilemmas faced by tech companies, balancing their business interests while safeguarding human rights.

The analysis also discusses the reliance of civilians on information communication technologies (ICT) during conflicts. Civilians often use ICT to ensure their safety, gain information on conflict conditions, locate areas of fighting, and communicate with their loved ones. This neutral sentiment highlights the significance of ICT in providing vital communication channels and access to information for affected civilians.

The analysis further sheds light on attempts by conflict parties and political actors to control the narrative and shape the discourse during conflicts. Conflict parties often aim to manipulate information and control the narrative for various reasons. This negative sentiment highlights the detrimental impact of information control on the public’s understanding of conflicts and the potential for shaping biased opinions.

A key observation from the analysis is the necessity of a multi-stakeholder approach in conflict contexts. It stresses the importance of different actors, such as ICT companies, content moderators, and organizations like the International Committee of the Red Cross (ICRC), working collaboratively to tackle the diverse threats to information flow. This positive sentiment reflects the recognition that no single entity can address the complexities of information challenges during conflicts alone.

Moreover, the analysis calls for identifying gaps in understanding and addressing the issues related to information flow during conflicts. This neutral sentiment highlights the need for more clarity and targeted efforts to bridge these gaps. The conclusion emphasizes the importance of comprehensively addressing the challenges and harnessing the potential of information communication technologies to ensure the free flow of information during conflicts.

In conclusion, the analysis explores the various challenges and dynamics surrounding the free flow of information during conflicts. It highlights digital threats, the role of tech companies, civilian reliance on ICT, information control by conflict parties, the necessity of a multi-stakeholder approach, and the need for identifying gaps for clarity. With this comprehensive understanding, stakeholders can work towards developing strategies and policies that uphold the values of information access and freedom of expression in conflict situations.

Khattab Hamad

Sudan is currently embroiled in a civil war between two forces that had been allied since 2013. Disagreements over security agreements and the unification of Sudan’s armies caused that alliance to collapse, and open fighting broke out on April 15th. Unfortunately, the sentiment surrounding this war is negative.

Information control has played a significant role in the conflict, with internet disruptions and the spread of misinformation being notable events. Authorities have a history of shutting down the internet during exams and periods of civil unrest, and the disruptions during the current conflict are regarded as efforts at information control. The sentiment towards these events is negative.

Another issue in the conflict is the misuse of social media platforms, which have been exploited by both sides to spread their own narratives and manipulate public opinion. This misuse has prompted concerns about information imbalance and led platforms like Meta to take down accounts associated with the Rapid Support Forces. The sentiment towards this misuse is negative.

The Rapid Support Forces (RSF) and the Sudanese Armed Forces (SAF) have been criticized for their harmful practices towards civilians and the nation’s infrastructure. Privacy violation cases, including the use of spyware, have been reported. The RSF imported the Predator spyware of Intellexa, while the National Intelligence and Security Service (NISS) imported the Remote Control System of the Italian company Hacking Team in 2012. The sentiment towards these privacy violations is negative.

The conflict has also had a significant impact on the ICT (Information and Communication Technology) sector in Sudan. Power outages have impaired network stability and e-banking services, forcing ICT companies to rely on uninterruptible power supply systems and generators. The sentiment towards this situation is negative.

On a positive note, telecom workers have been recognized as crucial for maintaining access to information infrastructure during conflicts. It is argued that they should be given extraordinary protection, similar to doctors and journalists, due to their vital role in ensuring the continuous flow of information. The sentiment towards this proposal is positive.

In conclusion, Sudan’s civil war has had far-reaching consequences, impacting security agreements, information control, privacy rights, the ICT sector, and the protection of key players in the information infrastructure. Efforts to address these challenges and protect these key players are essential for promoting peaceful resolutions and mitigating the impact of future conflicts.

Tetiana Avdieieva

During the armed conflicts in Ukraine, there have been severe restrictions on free speech and the free flow of information. Since the war began in 2014 with the occupation of Crimea, the country has witnessed a decline in the protection of free speech and access to information, alongside mass surveillance, content blocking, internet shutdowns, and increasingly sophisticated manipulation of information.

Digital security concerns have also arisen during these conflicts. Attacks on media outlets and journalists largely originate from Russia, with DDoS attacks on websites disrupting connectivity. Coordinated disinformation campaigns on social media and messaging platforms further exacerbate the situation, influencing public opinion and spreading false narratives.

One key issue highlighted is control over narratives and the free flow of information during armed conflicts. The ability to shape public opinion becomes a powerful tool in these circumstances, with the potential to influence the course of the conflict and its outcomes. It is crucial to address this issue by formulating, from the outset of the armed conflict, an exit strategy for eventually lifting restrictions. This strategy should consider the vulnerability of post-war societies to malicious narratives and work towards re-establishing the human rights that were restricted during the conflict.

Another significant concern is the gap in international law regarding the handling of information manipulation during peace and conflict. Current legal frameworks do not adequately address the issue, leaving room for exploitation and the spread of disinformation that incites aggression and hatred.

There have also been attempts to shift the focus away from the harm inflicted upon civilians and the suppression of opposition during these conflicts. These attempts to change the narrative divert attention from the atrocities committed and the need to protect the rights and safety of civilians.

The extensive support for the invasion among the Russian community is a cause for concern. According to data from Meduza, a significant portion of Russian citizens, ranging from 70% to 80%, support the invasion. This highlights the challenge of countering misinformation and disinformation within Russia and addressing the narratives that drive aggression and illegal activities.

The role of ICT companies in moderating harmful content in conflict settings is crucial. These companies need assistance, both globally and locally, to effectively combat harmful information. This includes distinguishing between harmful information and illegal content, as well as understanding the localized contexts in which they operate. Local partners can provide valuable insights into regional issues, such as identifying and addressing local slur words and cultural sensitivities.

However, it is important to approach the role of tech giants with caution, avoiding a strategy of blaming and shaming. Over-censorship and driving people to unmoderated spaces can be unintended consequences of such an approach. Instead, a collaborative approach that involves ICT companies, multi-stakeholder engagement, and responsible corporate practices is necessary to foster a safer online environment.

In conclusion, the armed conflicts in Ukraine have led to significant restrictions on free speech and the free flow of information. Digital security concerns, information manipulation, and the spread of disinformation within Russia pose additional challenges. It is crucial to adopt an exit strategy that lifts restrictions and safeguards vulnerable post-war societies from malicious narratives. Efforts should also be made to address gaps in international law regarding the handling of information manipulation. The support for the invasion among the Russian community and attempts to divert attention from civilian harm and opposition suppression further complicate the situation. ICT companies play a crucial role in moderating harmful content, and a collaborative approach is necessary to strike a balance between curbing misinformation and ensuring freedom of expression.

Audience

An analysis conducted by Access Now reveals that prevailing trends in content governance are endangering freedom of expression and other fundamental rights. Several issues have been identified in relation to parties involved in conflicts, highlighting the dangers faced by these rights.

During times of crisis, content governance has been exploited in various ways that breach international humanitarian law. One concerning practice is the intentional spread of disinformation as a warfare tactic. Platforms have also been used to move parts of the population from one territory to another, and the unlawful sharing of content depicting prisoners of war has been observed. These actions not only violate international law but also contribute to the erosion of freedoms.

While internet restrictions exist in conflict zones, it is interesting to note that Russia maintains significant accessibility to various platforms. Many Ukrainian media and telegram channels continue to be effectively available in Russia. Furthermore, despite restrictions, information can still flow through various social media and messaging platforms. This highlights the complexity of internet restrictions and the need for further examination.

The analysis also underlines the need for international laws addressing informational warfare. Both Russia and Ukraine face internet warfare, yet there is a lack of legal frameworks specifically designed to address this issue. The absence of such laws creates a significant gap in addressing and countering the threats posed by disinformation campaigns and cybersecurity breaches.

According to an audience member from Russia, the country faces numerous cybersecurity threats and disinformation campaigns, primarily originating from Ukraine. Instances of Russian citizens’ personal data being leaked and published online were cited, along with the identification of over 3,000 disinformation narratives targeting Russia. These threats are said to pose challenges to the integrity and security of information in the country.

Social media platforms’ over-enforcement is flagged as a major problem for media and journalists, with many legitimate news sources having their accounts suspended or restricted. This issue is particularly prevalent in cases involving conflict settings, such as Palestine and Afghanistan, where the presence of dangerous organizations contributes to heightened enforcement measures.

The complexity of platform rules is highlighted as a concern in conflict settings. In such situations, rules can be confusing and easily violated, with typical infractions including the posting of images depicting dead bodies. This observation sheds light on the challenges faced by content creators and users as they navigate restrictive guidelines during conflicts.

Addressing misinformation requires the implementation of upstream solutions, as highlighted by Maria Ressa. This approach focuses on addressing misinformation at its root causes rather than solely tackling its dissemination. By focusing on upstream solutions, it is possible to create more effective strategies to combat misinformation and its harmful effects.

The analysis raises questions about the design of platforms and the role of algorithms and business models in managing information. It suggests the need to reconsider and possibly redesign these aspects to ensure fairness, accuracy, and accountability in content dissemination. This observation emphasizes the ongoing need for innovation and improvement within the digital landscape.

BSR, a leading global organization, provides a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings. This initiative aims to promote the respect and protection of human rights, even in challenging circumstances. The toolkit, developed in collaboration with Just Peace Labs, offers detailed guidance, making it an invaluable resource for responsible business practices.

Furthermore, the analysis advocates for human-centered approaches in digital transformation, particularly in conflict zones. Stakeholder consultation can be challenging in war zones, highlighting the importance of ensuring that the interests and needs of all individuals are considered and that no one is left behind in the process.

There is a noted lack of focus on countries like Afghanistan and Sudan in discussions surrounding these issues. This observation emphasizes the need to broaden the scope of discourse and pay equal attention to conflicts and human rights violations occurring in these regions.

Global media platforms play a substantial role in shaping public opinion, primarily through their recommendation algorithms. However, concerns arise regarding the impartiality and bias of these algorithms. The analysis reveals that global media platforms often alter their recommendation algorithms to favor one side in informational wars, despite presenting themselves as neutral. This highlights the potential influence and manipulation of public opinion through these platforms.

Given the significance of global media platforms, the analysis argues that global society should exert more pressure on these entities. Increased accountability and transparency are necessary to ensure that these platforms operate in an unbiased and fair manner, considering the critical role they play in shaping public discourse.

In conclusion, the prevailing trends in content governance pose a threat to freedom of expression and fundamental rights. Exploitation of content governance during times of crisis, the need for international laws addressing informational warfare, and the over-enforcement by social media platforms are among the challenges highlighted in the analysis. The complexity of internet restrictions and the design of platforms also warrant further consideration. Additionally, the importance of upstream solutions, human-centered approaches, and the inclusion of marginalized regions in discussions emerge as key insights. Efforts towards increasing platform accountability and transparency are crucial to safeguarding a fair and unbiased digital landscape.

Session transcript

Chantal Joris:
Good afternoon, everyone, all the participants in the room, and also good morning, afternoon or evening for those who join online. My name is Chantal Joris. I’m with the freedom of expression organization Article 19, and I will be moderating the session today. In today’s session, we want to explore some of the current challenges posed to the free flow of information, specifically during armed conflicts. And I want to start with making a couple of opening remarks as to where we are at. We do know that conflict parties have always been very keen to control the narrative and shape the narrative during conflicts, perhaps to garner domestic and international support, to maybe portray in a favorable light how the conflict is going for them. And of course, also often to cover up human rights violations and violations of international humanitarian law. So this is nothing new, yet what has changed, of course, is what armed conflicts look like in the internet age. We see an increased use of digital threats against journalists and human rights defenders, mass surveillance, content blocking, internet shutdowns, and even the way that information is manipulated has become much more sophisticated with the tools that parties have available today. And of course at the same time, civilians really rely at an unprecedented level on information communication technologies to keep themselves safe, to know what’s going on during the conflict, where fighting takes place, and also to be communicating with their loved ones and see that they are okay. And also I want to emphasize a little bit that these issues are not necessarily limited to just sort of the top 5 to 10 conflicts that tend to make the headlines, but there are currently about 110 active armed conflicts in all regions of the world. And also beyond conflict parties, even states that are not part of the conflict have to grapple with questions, for example we’ve seen recently should they sanction propagandists, ban foreign media outlets, so this is really an issue that concerns all states and the whole world. And also what we have seen is that digital companies have become increasingly important actors as well in conflicts, and they do need to find strategies to avoid becoming complicit in human rights violations and violations of humanitarian law. So to discuss some of these challenges I’m very happy to introduce the panelists of today. Also I do want to make a quick remark in this context that we notice that many of our partners from conflict regions have not been able to come to IGF in person and have these discussions in person, although we talk a lot about the need for an open and secure internet, including of course during conflicts, and they are often the stakeholders that are most affected and they are not really able to join these discussions except online. Similarly, most of our speakers on this topic that we really wanted to have at the table are also joining us online today. The first speaker joining us online is Tetiana Avdieieva. She is Legal Counsel at the Digital Security Lab Ukraine, an organization that has been established to address digital security concerns of human rights defenders and organizations in Ukraine. We also have Khattab Hamad, an independent Sudanese researcher focusing on digital rights and internet governance, who is working with the Open Observatory of Network Interference and Code for Africa. We have Joelle Rizk joining us.
She is Digital Risks Advisor at the Protection Department of the International Committee of the Red Cross. And next to me here in person is Elonnai Hickok. She is Managing Director of the Global Network Initiative, of which Article 19 is also a member. I will also introduce what this multi-stakeholder initiative is all about. Also, we were supposed to have here Irene Khan, Special Rapporteur on Freedom of Expression. Unfortunately, she had to be in New York at the same time in person, and we were struggling to remove her from the program, so apologies for that. But she has been focusing on these questions as well, and I encourage you to read her report from last year on disinformation in armed conflicts; she continues to engage in this discussion as well. So, a quick breakdown of the format of the session. We have about 75 minutes to discuss these challenges. I will address a couple of questions to the speakers, but it is really meant as an interactive discussion, it is meant to be a roundtable, so I will also be asking some of the questions to you as well, after the speakers have been able to express themselves on the issues, so throughout the discussion, and then at the end there will also be a chance obviously to give input on what we might have missed and what open questions there are for the speakers. So perhaps let’s start with discussing the main digital risks that we see, and also the risks to the free flow of information during conflicts. I will first have Tatiana from Ukraine and Khattab from Sudan talk about this, but then also again I will be very keen to hear from you what, in your areas of work or from the regions you are from, you have been observing as sort of the key challenges in this respect. So Tatiana, if I can start with you.

Tetiana Avdieieva:
Yeah, hi everyone, and it’s my great pleasure to be here today and to talk about such an important topic. So first of all I wanted to share a brief overview of what is going on in Ukraine currently, regarding the restrictions on free speech, free flow of information and ideas, which were introduced long before the full-scale invasion, since the war in Ukraine started in 2014 with the occupation of Crimea, and after the full-scale invasion as a rapid response to the changing circumstances. So basically restrictions in Ukrainian context can be divided into two parts. The first part concerns the restrictions which are related to the regime of the martial law and derogations from the international obligations. And the second part relates to so-called permanent restrictions. For example, there is a line of restrictions based on origin, particularly concerning Russian films, Russian music and other related issues. Also, there are restrictions serving as a kind of follow-up of Article 20, for example, prohibition of propaganda for war, prohibition of justifications of illegal aggression, etc. The problem is, especially with the restrictions which were introduced after the full-scale invasion, that restrictions drafted in a rush are often poorly formulated and therefore there are lots of problems with their practical application. However, what concerns me the most in this discussion is the perception of the restrictions of the kind by the international community. The problem often is that people don’t take into account the context of the restrictions. And when I’m speaking of the context, it is not only and purely about missiles flying above someone’s head. It is about the motives which drive people to be involved into the armed conflicts. And that is a very important reservation to be made at the very beginning of this discussion, because we have to speak about the root causes. And I often make this comparison for me, armed conflicts can be compared to the rules of saving energy, that armed conflicts do not appear from nowhere and they do not disappear anywhere. So when, for example, a certain situation starts, we have to understand that there are motives behind the aggression on the side of the aggressor. And therefore we have to work with those motives to prevent further escalation and to prevent repetition of the armed conflict, to prevent re-escalation basically. In this case, assessment of the context is, unfortunately, not a basic math, it is rather a rocket science. Because for example, in Ukrainian context, the preparation of the fertile ground for propaganda for Russian interference has been done in the information space for at least the last 30 years of Ukrainian independence, when on the entire European level it was said that Ukraine is not basically a state and that there is no right to sovereignty and that was basically a gift to Ukrainian nation, that all the representations in front of international community from the side of the post-Soviet countries were done by Russia, etc. What does it mean? It means that there was a particular narrative which was developed and narrative with which we have to work. Why this is important? Because usually restrictions are treated, I would say, rather in vacuum. So we are trying to apply the ordinary human rights standards to the speech which is shared, developed, to the narrative which is developed in the context of the armed conflict. 
And it is very important because at the very end of the day, what any country which is in the state of war faces is the statement that as soon as the armed conflict is over, all the restrictions have to be lifted. And here we miss a very important point, the point about the transition period, the so-called exit strategy, which is very frequently substituted by automatic cancellation of the restrictions. And that actually is a part of the discussion on the rebuilding of Ukraine in terms of reinforcing the democratic values, re-establishing human rights which were restricted, etc. So at this particular point, it is very important to mention that we have to think about the transition period of lifting the restrictions from the very beginning of the armed conflict. Because when the restrictions are introduced, we have to understand that they cannot end purely when there is a peace agreement. Otherwise, it won’t make any sense from the practical standpoint because narratives will still be there in the air. Therefore, we have to develop this exit strategy and understand that post-war societies are very vulnerable towards any kind of malicious narrative. And they cannot be left without protection even after the end of the war. And finally, a brief overview of the digital security concerns. I will try to summarize it in one minute, not to steal a lot of time. Currently, there are lots of problems from the digital security side. For example, there are attacks on databases, attacks on media, which not only target the media as websites for sharing information, but also target the journalists, which is more important because people experience a chilling effect and they’re super afraid of sharing any kind of idea because they potentially might be targeted. Indeed, I mean, from the side of the aggressor state, because currently in Ukraine, at least in the Ukrainian context, the biggest threat is stemming from Russia, especially for those journalists who are working on the frontline and who can be captured, who can be tortured, who can be killed. And there were lots of examples of such things happening. Also, there is a problem of DDoS attacks on websites, which actually interrupts the work of the websites and disables a sustainable connection. There were attempts to spread malware and spyware in order to track individuals, to check what they are working on and to prevent, basically, the truth from being distributed to the general public. And finally, there are coordinated disinformation campaigns on social media, on platforms and messaging services, including Telegram, which is another important topic and probably a topic for a separate discussion. So I won’t be dwelling on that for my entire speech, but just mention it for you to understand that this discourse is very extensive and there are lots of things to talk about. I will stop here. I will give the floor back to Chantal. Thank you very much for listening to me, and I will be happy to share further ideas in the course of the discussion.

Chantal Joris:
Thank you very much, Tatiana. Khattab, if I can bring you in and have you share also your observations about the situation in Sudan, also following the recent outbreaks of hostilities a couple of months ago.

Khattab Hamad:
Thank you, Chantal. Hi, everyone. So I want to welcome you and the other participants, and it’s really an honor for me to speak at the IGF. So to keep the attendees updated, Sudan is going through a war between two forces that had been allied since the year 2013. And the alliance came to an end on April 15th due to differences over the security agreements related to the unification of the armies in Sudan. So this put the Sudanese people in a bad position, because the parties to the war are not following the laws of war, in addition to the war’s impact on basic services, including electricity and communication. So this contributed to widespread manipulation of the war narrative and the spread of misinformation, in addition to intense polarization. So to answer your question, in Sudan right now we have internet shutdowns, we have targeting of telecom workers, we have also disinformation campaigns and also we have privacy violations. And unfortunately, these practices are used by both sides of the war, not only one side, like the RSF, the Rapid Support Forces, or SAF, the Sudanese Armed Forces, the official military. So regarding internet disruption, internet disruption is not a new experience for the people in Sudan. The authorities used to shut down the internet during exams and civil unrest. And this time, due to the ongoing conflict, there were numerous and periodic internet disruptions in Khartoum, the capital of Sudan, and the cities of Nyala, Zalingei, and Al-Junaina. These events are considered an effort at information control during the war. However, some disruption cases in Khartoum are related to security concerns of… the telecom engineers and other telecom-related workers, as they may face violence because of their movement for maintenance. So the absence of internet connection opened a wide door to spreading disinformation, as people cannot verify information that they get from local sources. Moreover, disinformation during the conflict also exists in cyberspace and it has several actors, but there are two main players here. They are SAF, the Sudanese Armed Forces, and the RSF. Both parties are using proxy accounts and influencers on social media platforms to promote and propagate their narrative regarding the war. Actually, this practice puts civilians at risk because getting wrong information may impact their decision to move around their neighborhood or the decision of displacement. Moreover, what I observed is that disinformation is threatening the humanitarian response. So, for example, the ICRC office in Sudan posted on Facebook warning people not to follow disinformation. So also during this war, several privacy violation cases happened, such as physical phone inspection, a lot of cases of physical phone inspection by soldiers from both sides, and also the use of spyware. Actually, we couldn’t verify the use of spyware until now, but there are claims of that. But the important thing here is we have to mention that the RSF imported the Predator spyware of Intellexa. And Intellexa is an EU-based company that is providing intelligence tools. And also, this is not the first time spyware has been used in Sudan. The NISS, the National Intelligence and Security Service, imported the Remote Control System of the Italian company Hacking Team in 2012. So I think that’s it from my side, Chantal. Back to you.

Chantal Joris:
Thank you very much. And thank you also for this account and explaining how these information threats can also really lead to offline violence and concrete harms to civilians. So same question to the people in the room. What have you seen or what have you perceived as being the main, in your experience, the main risks to the free flow of information, be it through surveillance, propaganda, internet shutdown? What’s your perspective?

Audience:
Hi. Thank you so much for the great presentations. I’m Eliška Pírková from Access Now. We are also working on the issue of content governance in times of crisis, and we have recently been mapping a number of prevailing trends in the field that, in one way or another, put freedom of expression and other fundamental rights in danger. And we looked specifically at this issue from the perspective of international humanitarian law, and we are witnessing several issues where parties to the conflicts are very much the instigators. One of them is of course the intentional spread of disinformation as part of a warfare tactic, where we noticed a number of cases; we have different case scenarios that we are supporting with case studies that really happened in the field, such as, for instance, claiming or warning that an invasion will take place when in reality this invasion never occurred. There is a very specific example from Israel in 2021 where even international media were convinced that this invasion took place and reported on it, which was just part of a military strategy, and there are a number of other examples from different regions around the world where we see that. Another one is of course using platforms for the purpose of moving parts of the population from one territory to another, which from the perspective of international humanitarian law, at least in the context of non-international armed conflict, is not even permitted, and so we see those cases as well. Of course, there is the entire issue of content depicting prisoners of war, which was very widely reported and which can again put in danger the privacy, identity, and so the safety and security of the individuals depicted in the video content being shared. There may be another two or three case scenarios that we identified in the field; we are still gathering case studies, and this will all be summarized in our upcoming report that we are hoping to publish in the following weeks. I don’t want to preempt it, but I am happy to elaborate further without going into too much depth and to give space to others as well.

Chantal Joris:
Thank you very much for the excellent points. Anyone else?

Audience:
Thanks for giving me the floor and the opportunity to speak and express myself. I’m Tim from Russia, and what I can say about internet shutdowns and internet restrictions in terms of conflict is that it’s pretty obvious that any country involved in a conflict will ensure that there will be some restrictions on internet websites, media and so on, but frankly speaking it is not as restricted as it could seem from abroad, as long as there are plenty of ways; you can’t stop information from flowing around through Telegram, messengers, some social media and stuff, and lots of Ukrainian media and Ukrainian Telegram channels are still effectively available in Russia. So I can’t say there is a super restricted environment in the Russian media sphere. So far, the same as the Ukrainian speaker said, we face lots of cybersecurity threats coming obviously from Ukraine in the same way, like denial-of-service attacks, some sophisticated attacks on governmental and non-governmental private web services companies, and we have lots of data leaks; for example, recently Ukrainian hackers published a leaked database from a company that was a service provider for all the airline tickets and airline connections and stuff. So basically all the imaginable personal data, including names, dates and all the flight information of Russian citizens, was published on the internet, on Telegram, and was available to any malicious actors. And so far we see a lot of threats and insecurities from disinformation campaigns and fakes which are used as a weapon in the informational war happening alongside the real war between Russia and Ukraine. And it’s sad that this kind of informational war and the weaponry used in it is not described in any international law and is not even somehow imagined or prescribed, because the situation is like this: there is, say, international law for real wars and for real warfare, but there are no international laws for informational warfare, and both of the countries and all the citizens of our countries, both Ukraine and Russia, suffer from this internet warfare. So the situation is that both parties use these kinds of weapons in the informational war between our countries. For example, this year, working in a non-profit organization which focuses on countering disinformation and fakes in Russia, we have found over 3,000 disinformation narratives threatening the Russian Federation and Russian citizens in different ways. And this is the number of narratives; separately, we have counted each post and message in social media, and the number of messages, posts and reposts placed in social media reaches an overwhelming 10 million copies in the Russian media sphere.

Chantal Joris:
Thank you. I think there will probably be quite some disagreement in the room, and I will also let Tetiana perhaps respond and react to some of the remarks. Certainly there is a gap in international law as to how to deal appropriately with information manipulation, actually both in times of peace and in times of armed conflict. I don’t know if we have any… Yes.

Tetiana Avdieieva:
Yeah, just a brief response. First of all, I find it particularly interesting when the discussion around incitement to aggression, propaganda for war and incitement to hatred turns into a discussion around the disinformation campaigns spread inside Russia, which for me is slightly a shifting of the context. Because when we are speaking of the aggression issues per se, we have to take into account the narratives which are primarily aimed at actually instigating the conflict, and also narratives which are shared inside Russia connected to, for example, inviting people to join the Russian armed forces, or actual incitement to commit illegal activities, which are predominantly shared in Russian media, especially those which are state-backed. Also, as regards the digital security threats and digital security concerns, what concerns me the most is the attempt to basically substitute the actual topic of harming civilians, and the topic of trying to suppress activists, opposition, human rights defenders and journalists, with the fact that there are restrictions which affect the entire community in Russia. First and foremost, because among the Russian community itself, there is extensive support for the invasion. Even the Russian independent media outlet Meduza, in its findings and research, stated that from 70 to 80% of Russian citizens actually support the invasion. When assessing the restrictions in this context, the proportionality analysis, in my opinion, would differ a little bit compared to the situation when we are just declaring the facts without providing the appropriate context for them. So I will stop here and I won’t create a battle out of this discussion. But I think it’s very important to clearly define the things we are talking about and to clearly indicate in which context they are done, to whom they are attributable, what the specific consequences of the actions taken are, and what the reasoning behind those actions is. Thank you.

Chantal Joris:
Thank you. Hello? Yes, thank you very much. As mentioned, when we go to the factual scenarios of specific conflicts, for sure, there can be a lot of disagreement as to what specifically the issues are. I will take one more contribution, and then let’s hear from Joelle Rizk from the ICRC.

Audience:
Hi, I’m Rafik from Internews here. This may be more of a niche issue potentially, but one of the biggest frustrations that we hear from our media and journalist partners particularly, though also from civil society, is around over-enforcement by social media platforms, where legitimate news reporting or commentary on conflict is taken down and legitimate news sources have their accounts suspended or restricted from amplifying or boosting content. Sometimes it’s through automation, in cases like Palestine or Afghanistan where you can’t report on the news without mentioning dangerous organisations; we find a lot of media outlets wind up getting their pages restricted. And then other times it’s through mass reporting and targeting of these news sources that results in them incorrectly having their pages taken down. Sometimes people do actually violate the rules of the platform too, maybe posting pictures of dead bodies and things like that that do violate the rules, but in a conflict setting it’s often complicated. So yeah, just in terms of the free flow of information, that’s another issue.

Chantal Joris:
Thank you, yes, absolutely. I mean, also promoting a certain narrative or sharing violations for propaganda purposes, for example, is obviously something very different than reporting on them to make them publicly known, but given how often automated tools are also involved in content moderation, it’s very difficult to make that distinction properly. Joelle, let me turn to you and perhaps ask you as well: hearing about the situation in Ukraine and Sudan, is that also the sort of threat that you have perceived globally as a humanitarian organization, and what sort of specific risks has the ICRC identified in terms of how these digital threats can harm civilians?

Rizk Joelle:
Thank you, Chantal, and thank you for the contributions made on Ukraine and Sudan. I will maybe focus a little bit more on the harms to civilians rather than on the nature of the threats, because of course our concern is not only about the use of digital technology but also about the lack of access to it, especially to connectivity, particularly when people need reliable information the most to make potentially life-saving decisions. The information dimension of conflict also becomes… I’m sorry, we have a little bit of… You’re breaking up a little bit, I don’t know if it’s the connection or if there’s anything you can do with the mic. Let me change the mic setting. Is it better like that? Okay, yes, I see you nodding. All right, great, thank you. Sorry, it was a mic setting, I believe. So I was saying that the information dimension of conflict has also become, in a way, part of digital front lines, because digital platforms are used to amplify the spread of harmful information at a wider scale, reach and speed than we have ever seen before. And that is a concern because it compromises people’s safety, their rights, their ability to access these rights, and their dignity. The difficulty is that this happens in various ways that are very difficult to prove; Tetiana spoke of attribution a little bit. It is indeed very difficult not only to do that, but also to prove how harmful information is actually causing harm to civilians affected by conflict, and I’ll try to speak about that a little bit. And I see that different actors, whether they are state or non-state, are leveraging the information space to achieve information advantages, as you had said earlier, but also to shape public opinion, to shape the dominant narrative, and also to influence people’s beliefs, their interests and their behaviors, which in situations of conflict really becomes an issue of risk, potentially, to other civilians. The information space in that sense is an extension of the conflict domain, and it impacts people that are already in a vulnerable situation because they are already affected by conflict. The digitalization of communication systems then becomes basically a convergence of the information and digital dimensions. That being said, not all harmful and distorted information, whether it is misinformation, disinformation, malinformation or hateful information, is the result of organized information operations; not all of it is state-sponsored. But the use of digital platforms really involves a mix of state and non-state actors, and both an organized spread of narratives and an organic spread of harmful information. And maybe also just to caveat that this makes it very complex from a humanitarian angle: to identify and detect that something is a harmful narrative, but also to assess what the harm to civilians is, and then to think of an adequate response to the complexities that I just mentioned. And what I have seen in the past years is that in countries affected by armed conflict, the spread of misinformation and disinformation, and also hateful and offensive speech, can aggravate tensions and can intensify conflict dynamics, which of course will take a very important toll on the civilian population.
For example, harmful information can increase pre-existing social tensions and pre-existing grievances. It can even take advantage of pre-existing grievances to escalate social tensions and exacerbate polarization and violence, all the way to the point of a disintegration of social cohesion. Information narratives can also encourage acts of violence against people or encourage other violations of humanitarian law, and you already mentioned quite a few examples; Eliška also mentioned a couple of examples. The spread of misinformation and disinformation can increase the vulnerabilities of those affected by conflict, through the distress and the psychological weight it can cause, which is often invisible. For example, think of how harmful information may feed anxiety, fear and mental suffering of people who are already under significant distress. We fear that the spread of harmful information can also trigger threats and harassment, which may lead to displacement and evictions, and I think a couple of examples were already given in the room. We also worry about stigmatization and discrimination. Think of survivors, for example, of sexual violence. Think of families that are thought of as belonging to one or another group, one or another ethnic group, for example, where they may be stigmatized, or of people being denied access to essential services as a result, only because they belong to a group that is the subject of an information campaign or a narrative. We also fear that, with distorted information in times of emergencies, people’s ability to access potentially life-saving information is heavily compromised today. People may not be able to judge what information they can trust at a time when they really need accurate and timely information for their safety and for their protection: for example, to understand what is happening around them, where danger and risks may be coming from, whether roads are open or not, safe or not, the locations of checkpoints, et cetera, and how and where they may find assistance, whether it’s medical or other types of assistance, or take measures and make timely decisions to protect themselves or even to search for help. So the digital information space can also become a space where behaviors that are counter to international humanitarian law may occur, including, and I will not give contextual examples, incitement to the targeting of civilians, to killing civilians, and making threats of violence that may be considered as terrorizing the civilian population. But also information campaigns, whether they are online or offline, and I would like to underscore online and offline, can disrupt and undermine humanitarian operations. Khattab spoke a bit about that, but I want to say that when this happens, undermining humanitarian operations may also hinder the ability to provide humanitarian services to the people most in need of them, and of course also compromise the safety of humanitarian aid workers. One last point I’d make on this is that even the approaches that are adopted to address this phenomenon, and Chantal, you mentioned that in the beginning, may themselves, intentionally or not, impact people’s access to information. They may fuel crackdowns, more surveillance, more tracking of people, crackdowns on freedoms, on media and journalists, and of course also on political dissent and potentially also on minorities. So as a humanitarian actor, we believe that this is an issue that requires
a bit of a specific attention, not only because of the implication it has on people’s lives, their safety, and their dignity, but also because of how complex the environment is. And from that angle, a conflict-sensitive approach will be necessary. We’re used to discussing a lot on the impact of disinformation, for example, from a point of view of public health campaigns, election campaigns, freedom of speech, et cetera. When it comes to conflict, a conflict-sensitive approach will be necessary. In other words, an approach that really helps us ask how to best assess the potential harm in the information dimension of conflict, and also how that may have impact on civilians that are already affected by several other types of risks, mostly offline. And of course, think of adequate responses that will not cause additional harm or amplify harmful information, whatever the type of that information will be. And happy, of course, to talk a little bit more about that and how it connects to other risks later in the hour. Thank you.

Chantal Joris:
Thank you very much, Joelle. I do find this point very interesting: as a freedom of expression organization, we look at something like disinformation through the lens of the human rights framework and the test to apply to restrictions on freedom of expression. But it is interesting to think about it, again, from the perspective of the potential harm, what the adequate responses are, and whether they are the same as the ones we would normally identify, as a freedom of expression organization, as adequate responses to disinformation that do not have unintended negative consequences. With that, let me move to Elonay. I know that some GNI members are telecommunication and internet service providers or hosting platforms, so I am curious to hear what discussions you have had at the GNI specific to conflicts, and perhaps you can talk a bit about what pressures companies have reported facing from the conflict parties when they operate in these conflicts.

Speaker:
Yeah, sure. Thanks, Chantal, and thanks for the opportunity to be on this panel. Maybe to start, just to say that GNI is a multi-stakeholder platform working towards responsible decision-making in the ICT sector with respect to government mandates for access to user information and removal of content. We bring together companies, civil society, academics and investors, and all of our members commit to the GNI Principles on Freedom of Expression and Privacy. Our company members are assessed against these principles in terms of how they implement them in their policies, processes and actions, and we also do a lot of learning work and policy advocacy. As part of our learning work, we started a working group on the laws of armed conflict to examine responsible decision-making during times of conflict and the challenges that many of our member companies were facing, and we are also holding a learning series, organized by GNI, the ICRC and CIPRI, which is meant to enable an honest conversation around the ways that ICT companies can have impact and be impacted in the context of armed conflict. That is really to say that I am coming to this conversation as GNI, not necessarily being an expert in IHL or in working in times of armed conflict, but we are trying to bring together the right experts, ask the right questions and have the conversations that are necessary to help companies and other stakeholders navigate these really complicated situations.

So, to answer your question, Chantal: as we have heard from a number of our speakers today, armed conflicts are really complex and there is a lot at stake. Technology companies may offer services that support critical functions and provide critical information for citizens, but they can also be used to directly or indirectly facilitate violence, spread false information, and potentially prolong and exacerbate conflicts, and that is just a few of the potential impacts. There are a number of different risks that companies may need to navigate during times of conflict, and they often have to take difficult decisions that require balancing a number of stakeholder interests. This includes risks to people, individual users, journalists, vulnerable communities and societies, as well as risks to the company itself, including its infrastructure, services, equipment and, probably most importantly, its personnel. Especially for telecom companies with offices on the ground, their personnel are often at risk. Companies may need to navigate a whole range of questions about whether they operate in a context and what that impact might be; I don't think there is a clear-cut answer. On one hand, they may be providing access to critical information and might be a more rights-respecting alternative, but they also might be used to facilitate the violence. They have to navigate questions about how they operate and function during times of conflict, including how they respond to government demands. These can take many different forms, including requests for access to user information, giving access to networks for surveillance purposes, shutting down networks, carrying messages on networks, removing content, and more. We have seen that these demands may be informal, the legal basis for the demand may be unclear, and the duration of the measure being required may not be specified; for example, it might not be clear when a network shutdown should be ended.
The scope of the demand may be extremely broad. And something another speaker said that is important is that these demands can come from both sides of a conflict, not just one government. So as companies manage risk to people and to their company, their ability to respond to government mandates in the ways that might be available to them during times of peace can be really limited. For example, during a time of peace, you could say a company should request clarity on the legality of the request and communicate with the government to determine the exact requirements; respond in a way that is minimal; refuse to comply, partially comply or challenge a request through legal channels; disclose information about receiving the request to the public or notify the user; and maintain a grievance mechanism for when the privacy and freedom of expression of users is impacted by complying with the request. But in times of conflict, as companies face these different risks they have to manage, it can be really difficult for them to undertake these measures. From the discussions we have had, things that are useful include companies having risk management frameworks in place, clear escalation channels, clear thresholds to understand what triggers different actions, working with other actors to understand the legality of requests, working with other companies to coordinate actions in a specific context and, importantly, engaging with experts, including to understand the implications of different decisions, and ensuring formal and constant review of decisions to improve their actions going forward. Another challenge we have heard about in our discussions is that it can also be difficult to understand when to pull back or de-escalate measures that are in place, because it is not always clear when a conflict ends.

Chantal Joris:
Thank you very much. I do also really support, in these contexts, the necessity of a multi-stakeholder approach, because the ICRC, say, might not classically be an expert in content moderation, or maybe not yet, maybe that is still to come; ISPs are not necessarily experts in conflict settings; and both of them maybe don't understand the typical threats around disinformation. So I do think it is extremely important that different actors work together. Let me go back to Tetiana, and maybe focus this second half of the discussion a bit more on trying to identify gaps where we need more clarity, and also have Tetiana and Khattab speak to the role of ICT companies specifically in the context of their conflicts. Tetiana, over to you.

Tetiana Avdieieva:
Yeah, thank you very much, and I particularly like how the discussion is currently going. What I wanted to briefly follow up on, and maybe start the discussion around how ICT companies and platforms generally have to respond, is that we have to make a clear distinction between the organic spread of harmful information and the spread of actually illegal content, and this line probably has to be specifically identified for the context of armed conflict, where the effect of organic harmful information is amplified by the very context in which it is put.

As regards the ICT platforms: in Ukraine there is no actual mechanism to engage with the platforms at the state level, in the sense that we do not have jurisdiction over most of the tech giants, and that creates the biggest problem, because there is no opportunity to communicate with the platforms other than through voluntary cooperation from their side. That is probably the biggest challenge we as an international community have to resolve, because states which face armed conflict or civil unrest, and we can expand this even to other emergency situations, usually do not have legal mechanisms to communicate with the platforms, and that is the primary stage for the discussion. We have to understand when companies have to respond to governmental requests, and to the requests of which governments they have to respond, especially when there is suspicion, or when we actually know, that a government is an authoritarian one, or that the state has a very high index of human rights breaches: whether companies should be involved in discussions with such governments and states at all. So that is the primary point we probably have to think about.

The second thing is to what extent IHL and IHRL have to interact when we are speaking about the activities of ICT companies. For example, and I can share the link in the chat, our organization, Digital Security Lab Ukraine, has done extensive research on disinformation, propaganda for war, international humanitarian law, international criminal law and international human rights law. There is a big discourse about what the definitions are, which legal regime is applicable, and how states and the international community have to react when these kinds of speech are delivered. With companies it is even more difficult, because, and I can absolutely understand why it happens, they would rather wait for international organizations, for example UNESCO, the OSCE or the Council of Europe, to say: well, this is incitement to genocide, the threshold has or has not been reached. And that is actually a big plus for multi-stakeholder collaboration, because there are certain actors which are empowered, which are put in place, to call particular legal phenomena by their proper names. I wish I could say that there is incitement to genocide in what Russia does in Ukraine, but unfortunately domestic NGOs will probably not be the most reliable and trustworthy source in this case. So that is the point at which international organizations have to step in, both intergovernmental organizations and international NGOs, who can elaborate on those issues.
And that might be a potential solution for how ICT companies might deal with prohibited types of content and prohibited kinds of behavior, what is usually called coordinated inauthentic behavior online. Most probably they need assistance at the global level, as well as assistance at the local level, in order to better understand the context. For example, when we are speaking about slur words, it is most probably more reasonable to resort to the assistance of local partners.

And finally, there is the issue of enforcement. Here, my main point in any discussion is that, unfortunately, we usually try to blame and shame companies which are already acting in good faith. For example, we are constantly pushing Meta to do even more and more and more, and it is nice that Meta is open to a discussion. But on the other hand, we have companies such as Telegram and TikTok, which are more or less reluctant to cooperate or, in the case of Telegram, absolutely closed to cooperation with either governments or civil society. We also have to solve this issue in particular, because there is a big problem of people migrating from the safe spaces, which are moderated but have certain gaps in moderation, to spaces which are absolutely unmoderated, just because people feel over-censored in the moderated spaces. And this over-censorship is often caused by our blaming and shaming strategy. The very same approach was actually seen when Meta, for example, was blamed for its increased moderation efforts in Ukraine. It is good that ICT companies finally started to do something, and our main task is not to blame and shame them for not doing the same in other regions, but rather to encourage them to apply the very same approach in all other regions and situations: to develop crisis protocols, to initiate discussions about IHL and IHRL perspectives, to say publicly what kinds of problems they face, and probably to launch public calls for cooperation, where local NGOs can apply and can themselves engage with content moderation teams, policy teams and oversight teams, in case the ICT company has any. So that is my main point, probably, to all the actors involved: when we see a good behavioral pattern on behalf of an ICT company, we have to encourage them to expand it to other contexts rather than shame them for having acted in this way only in one situation.

Chantal Joris:
Thank you very much. And I do echo the calls on companies to take all situations of conflict equally seriously, and not to focus more on the ones that tend to make headlines or where there are bigger geopolitical pressures behind them. So, over to Khattab, and then I have two last questions for Elonay and Joelle. In the interest of time, if you can keep your interventions relatively short, that would be appreciated, so that we also have a couple of minutes for questions from the audience. Khattab, over to you.

Khattab Hamad:
Thank you, Chantal, and thank you, Tetiana, for the great intervention. I will start with the challenges that ICT companies face during the conflict in Sudan specifically. The major challenge that ICT companies are facing in Sudan during the war is electricity, to be honest. Before the war, the national electricity grid was providing power to only 40% of citizens, and since the war started there has clearly been a huge shortage in power supply. This has impacted network stability, and by network I mean the telecom network, not the power network, and the availability of data centers, which has affected e-banking services in Sudan and other basic governmental services. The ICT companies had coped with power shortages by equipping their devices, stations and data centers with uninterruptible power supplies (UPS) and power generators, but due to the circumstances of the war, as I mentioned earlier, the companies could not deliver fuel to the generators because of security concerns for the workers. This led MTN Sudan, an ISP in Sudan, to announce that it had a service failure due to the inability to deliver fuel.

I will now turn to the role of social media platforms in the ongoing conflict. Social media platforms actually played a major role in ousting the National Congress Party of Sudan, which had ruled Sudan for 30 years, and they assisted us in our pro-democracy movement. However, these platforms are the main tools of opinion manipulation during the ongoing conflict, as both conflict parties are using them to promote their narrative of the war. The new element here is that there is a foreign actor playing a major role in the Sudanese cyberspace, which is Meta. Meta took down the official and other related accounts of the Rapid Support Forces, justifying this by saying the RSF is considered a dangerous organization, according to the Middle East Eye website. And yes, I confirm that the RSF is a dangerous organization, and we know its human rights record and how bad it is. But this step from Meta contributed to the efforts of the SAF to control the information and the narrative of the war, as there is now only one channel of information: you can get information from the SAF while the RSF is suppressed. My concern is that, yes, both sides are bad, but we should maintain a free information environment in which people can get the information they want and filter it for themselves, rather than take decisions that indirectly contribute to prolonging the war and assist the process of polarization. Taking a decision without considering the local context is a big mistake. I also have another concern: the RSF itself was part of the SAF, as the SAF founded the RSF in 2013, so it makes sense that both are dangerous organizations; how can you take down one organization and leave the other? The decision also impacted the free flow of information. For example, fact-checkers cannot find information to verify claims, as there is only one channel of information, and it also has a security impact on people on the ground. So there are some gaps that I want to raise, and I think they should be filled. In this era, the right to access information is tied to cyberspace.
The front liners of access to information are the telecom workers, telecom engineers and other telecom-related workers, because they are the people who provide and operate the infrastructure that allows us to access information. Those workers should be given special protection under international law, like doctors, journalists and human rights defenders. Moreover, in Sudan we need more and more training for our people, because unfortunately we do not have enough human resources to grow our internet governance; this knowledge is limited to specific people, and unfortunately these people are using it to restrict the free flow of information and freedom of expression. We also have to amend our laws, such as the Right to Access Act, the Cyber Crimes Law and the Law of National Security, as they have been abused against victims by the same people who have this knowledge. So I think that is it from my side; back to you, Chantal, thank you.

Chantal Joris:
Thank you very much. Yeah, it is interesting: we have now heard twice about these complications around ICT companies being, de facto, asked to choose sides between the parties to a conflict, as Elonay also mentioned earlier. And I think it is a very interesting point about the key importance of the staff who are in charge of keeping these ICT systems going, and that they perhaps even need specific protections to be able to do that. Elonay, the GNI does refer to the UN Guiding Principles on Business and Human Rights, which are also key to the GNI principles on how companies should respect human rights. They only make very brief reference to humanitarian law, so maybe just an open question: do you feel that there is a sense from companies that they need more guidance as to what it means for them to respect humanitarian law in addition to human rights?

Speaker:
I mean, yes. I think that is very central to a number of conversations that happen at GNI. I would say that many technology companies approach risk identification and mitigation through the lens of business and human rights, and this includes relying on frameworks such as the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles, as you just mentioned. I wanted to highlight that there are a couple of relevant principles, and parts of the commentary of the UNGPs, for companies and states with respect to operations in conflict-affected areas. Importantly, according to the UNGPs, a core principle of the corporate responsibility to respect human rights is that in situations of armed conflict, companies should respect the standards of international humanitarian law. The UNGPs also state that when operating in areas of armed conflict, businesses should conduct enhanced due diligence, given the potentially heightened risk of negative human rights impacts, and there is emerging guidance from civil society organizations on how companies can undertake this enhanced human rights due diligence through a conflict lens. I think IHL can help inform tech companies operating in situations of armed conflict about the risks to which they might expose themselves, their personnel and other people. But, as you mentioned, I think more guidance is needed on how due diligence processes can incorporate IHL, and more work can be done on articulating what IHL means for ICT companies.

Chantal Joris:
Thank you very much. Joelle, as the main guardian of IHL, I know the ICRC is also looking into some of these legal and policy challenges that have arisen through these cyber threats. Can you talk a bit about the global advisory board which has supported the ICRC in addressing some of those, and perhaps share some of the initial findings?

Rizk Joelle:
Of course. Would you like me to focus more on ICT companies, since that is where the discussion went? Yes, yes, sure. Okay. So yeah, thanks, it is a good question, Chantal. The ICRC set up a global advisory board about two and a half years ago. Between 2021 and 2023, we brought together, really at a senior level, experts from the legal, military, policy, tech and security fields to advise the president and the leadership of the ICRC on new and emerging digital threats, and to help us improve our preparedness to engage on these issues, not only with parties to armed conflict but also with new actors that we see play a very important role in complex situations, including of course civil society, but also tech companies. Over these two years we hosted four different consultations with the advisory board, and hopefully next week, on October 19th, we will publish the discussions and recommendations. They will not be ICRC recommendations; they will be the advisory board's recommendations on digital threats to civilians affected by armed conflict. I will briefly mention the four trends that were discussed in these consultations between the global advisory board and the ICRC, and then I will focus a little on the recommendations linked to the information space and to ICT companies. I will try to be quick, because I am aware of the time.

The first trend discussed between the ICRC and the global advisory board is the harm that cyber operations cause civilians during armed conflict, focusing on the emerging behavior of parties to armed conflict in cyberspace, but also of other actors in that space, in disrupting infrastructure, services and data that may be essential to the functioning of society and to human safety. There we consider that there is a real risk that cyber operations will indiscriminately affect widely used computer systems connecting civilians and civilian infrastructure, in a way that goes beyond the conflict. As a result, they may interrupt access to essential services, hinder the delivery of humanitarian aid, and of course cause offline harm, injury and even death to civilians.

The second trend discussed is the question we are discussing today: connectivity, the digitalization of communication systems, and the spread of harmful information. Similar to what we have already discussed at length in this session, and recognizing that information operations have always been part and parcel of conflict, the digitalization of communication systems and platforms is amplifying the scale, reach and speed of the spread of harmful information, leading to the distortion of facts, influencing people's beliefs and behaviors, raising tensions, and all that we have already discussed, but really stressing that the consequences of this are online as well as offline.

The third issue discussed, and this is really an issue we hold very close to heart as the ICRC, is the blurring of lines between what is civilian and what is military in the digital dimensions of conflict, seeing that civilians and civilian infrastructure are increasingly becoming targets of attacks in the digital dimension of conflict.
And of course, this is an issue of growing concern as digital front lines are really expanding, and they are also expanding, let us say, the domains of conflict. The closer digital technologies move civilians to hostilities, the greater the risk of harm to them; and the more digital infrastructure or services are shared between civilians and the military, the greater the risk of civilian infrastructure being attacked, and as a consequence, of harm to civilians, but also of undermining the very premise of the principle of distinction between civilians and military objectives. And finally, and by no means the least important, the fourth issue, very important to us as a humanitarian actor and to all humanitarian organizations, is the way in which, in the cyber domain, cyber operations, data breaches and information campaigns are undermining the very trust that people and societies place in humanitarian organizations, and as a result, the ability to provide life-saving services to people.

The board made 25 recommendations. I will of course not go through them now, but I will invite you to have a look and read the report that will be launched on October 19th; I think it is really the beginning of an important conversation between multiple stakeholders in this field. I will speak a little on the recommendations in relation to the spread of harmful information, and, having listened to you now, I will also add a few recommendations specific to ICT companies. In addition to recommendations to parties to respect their international legal obligations, but also to assess the potential harm that their actions and policies are causing to civilians and to take measures to mitigate or prevent it, which is of course a broad recommendation, there is more specifically a recommendation to states and societies to build resilience against harmful information in ways that uphold the right to freedom of expression, protect journalists, and really improve the resilience of societies. By a resilience approach we of course understand a multi-stakeholder approach that also involves civil society and companies alike, thinking about it as a 360-degree approach to addressing the information disorder. Another recommendation, to the platforms, recognizes the fact that a lot of this misinformation and disinformation is spreading through social media and digital platforms, and calls on them to take additional measures to detect signals and to analyze sources, methods of distribution and different types of harmful information, in contextual approaches to managing and analyzing what may exist on their own platforms. Particularly in relation to situations of armed conflict, I think Khattab's example is a classic example of the importance of contextualizing these policies; these policies and procedures, including when it comes to content moderation, as Khattab mentioned, should also really align with the humanitarian law and human rights standards that Chantal also mentioned. And lastly on that, there is a recommendation to us and to humanitarian organizations at large to strive to detect signals of the spread of harmful information, but also to assess its impact on people, keeping in mind that any response to harmful information must not amplify harmful information itself
or cause additional or other unintended harm, and of course a call to contribute, again, to the resilience-building of affected people in conflict settings. If I still have a couple of minutes, I will mention some of the recommendations to ICT companies at large, which are more linked to the cyber domain than to information operations or harmful information. These include the segmentation, where possible, of data and communication infrastructure between what serves military purposes and what is used by civilians; awareness among companies of the risks and legal consequences around their role, their actions, and the support they may provide to military operations and private clients, and of the consequences that their involvement and the use of their products and services in situations of conflict may have; and ensuring that restrictive measures taken in situations of conflict, whether sanctions or self-imposed limitations, do not impede the functioning and maintenance of medical services and humanitarian activities, and of course the flow of essential services to the civilian population. I will stop here. Thank you, Chantal, for giving me the opportunity to elaborate on that.

Chantal Joris:
Thank you very much. I know we are basically out of time, but before we get kicked out I do want to see if anyone has something they would like to add, something you think has been missing from the discussion and should be taken into account by the people working on this, or of course questions to the speakers, if they can stick around for five more minutes.

Audience:
Yeah, thank you. My name is Julia. I work for the German Development Cooperation, and I would have one question. Yesterday morning, Maria Ressa said we need more upstream solutions for the disinformation topic, and we have now heard a lot about more downstream solutions: content management, taking down certain profiles, et cetera. So my question would be, what are your views on questions of platform design? How do we talk about redesigning algorithms, business models, et cetera, and what are your perspectives on these aspects? Thank you.

Speaker:
I would just say that I think it is really important that companies start to build in the capacity to apply a conflict lens to the development of their products, and I know that the ICRC, for example, is working with companies to build out this capacity. So I think we have to consider both upstream and downstream solutions.

Chantal Joris:
Khattab, Joelle, Tetiana, do you want to come in on this question quickly?

Rizk Joelle:
I will just say very briefly that it is in line with a 360-degree approach, of course, one in which upstream thinking in the very business model reinforces, in a way, how these policies can be enforced. So from that angle I would tend to agree, but realistically I think this would be a very challenging discussion, one that also requires expertise that may not be in the hands of those currently conducting that feedback loop with the tech companies.

Chantal Joris:
Thank you very much. I will perhaps see if there are any other quick questions in the room. Yes, go ahead.

Audience:
Hi, I will be super quick. Lindsay Anderson from BSR. For those who don't know, we help companies implement the UNGPs and conduct human rights due diligence, and I just wanted to flag a resource that might be useful for folks on this topic. About a year ago, we published a toolkit for companies on how to conduct enhanced human rights due diligence in conflict settings, which we developed alongside Just Peace Labs and other organizations. It is very detailed and obviously targeted at companies, but it might be useful for those who are advocating with companies and want to understand what, under the UNGPs specifically, they should be doing and what enhanced human rights due diligence looks like in practice. If you Google BSR, conflict-sensitive due diligence, you will find that resource.

Hi, I'm Farzaneh Badi. I am working on a project related to USAID, which is looking at human-centered approaches to digital transformation. They want to know and understand what that can look like and how they can actually engage with local communities when they are doing digital transformation work, and one part of that is dealing with crisis. But the challenge we see with human-centered approaches and human rights analysis is that, especially in countries that are war zones, getting in touch with communities, receiving their feedback and having that kind of stakeholder consultation is extremely difficult. I want to know if there are actual recommendations out there. Also, how can we use these human rights mechanisms and human-centered approaches so that no one is left behind? Because we are not talking about Afghanistan anymore. So thank you so much for this session, because I have been thinking about Sudan and about Afghanistan, how sanctions affect them and how they are in crisis; in meetings like this we need to talk more and more about them so that they won't be forgotten. So thank you for this session, and recommendations on how to get in touch with communities and address their needs, both when we are doing digital development and afterwards, during a crisis, would be great. Thank you.

Chantal Joris:
Thank you very much. I know a lot of material has been mentioned that will come out, and some of it, I think, also focuses on stakeholder engagement, but I think you are absolutely right: there is still a lot more to be learned and improved. So, if anyone has anything to offer in this sense. Yes.

Audience:
Yeah, thank you for giving me the space. I want to support Tetiana's words, and I think that international society should put more pressure on global media platforms, because they basically control what people think with their recommendation algorithms. Facebook can effectively bring about a revolution at the click of a button by altering the news feeds of accounts in a given country. We have analyzed this, and we see that global media platforms are extremely opposed to publishing their recommendation algorithms. It was also mentioned before that some global media platforms take sides in the information wars happening across the globe, and that is a bad situation, because they are supposed to be neutral; there is no good and bad side, there is side A and side B in every conflict. We see that global media platforms tend to take sides and alter their recommendation algorithms to the benefit of one of the warring sides, but they are not doing it publicly; they try to obscure it and pretend to be unbiased and neutral, but they are not. So I think that global society, and here I support Tetiana one hundred percent, should put more pressure on global media platforms globally. Thank you so much.

Yes, thank you very much. I do think there have been long-standing calls for more transparency when it comes to recommender systems; the Digital Services Act has just been adopted in the EU, so let us see if this will bring improvement, and I know that Eliska has strong views on this as well. Since a couple of us mentioned several resources, I mainly wanted to mention the Joint Declaration of Principles on Content Governance and Platform Accountability in Times of Crisis, which, together with Article 19, you kindly co-drafted, Tetiana. We did not manage to come up with a shorter title. The document is available on our website; it is a joint effort of a number of civil society organizations that either have first-hand experience with crisis or, like Access Now and Article 19, have global expertise in this area. Even though it is a declaration, we still managed to put together ten pages of relatively detailed, in some instances, rules for platform accountability. Why am I mentioning the declaration? It is specifically addressed to digital platforms that find themselves operating in situations of crisis, and it has different recommendations for what should be done prior to escalation, during escalation, and post-crisis, emphasizing correctly, as the speaker from GNI mentioned, that there is no clear end or starting point to any crisis. The declaration was launched at the IGF last year, so it is already one year old, but, without going into the details, I think some important principles and rules can be found in there that can serve at least as a guiding light. Thank you.

Chantal Joris:
Thank you so much. I have been told to close, so perhaps just to say that Article 19 is also working on two reports, one specific to propaganda for war and how it should be interpreted under the ICCPR, and the other trying to identify and address some of the gaps that exist when it comes to the digital space and armed conflict. So, as you can tell, a lot more material is coming out; it is still not quite enough yet, or rather it is just the start of a process. Thank you to our excellent speakers, Joelle, Tetiana, Khattab and Elonay; it was a pleasure to have you. And thank you to everyone in the room and online who participated. We will be speaking about this topic for years to come, for sure. Thank you so much. Thank you.

Audience: speech speed 162 words per minute; speech length 2320 words; speech time 861 secs

Chantal Joris: speech speed 157 words per minute; speech length 2388 words; speech time 911 secs

Khattab Hamad: speech speed 118 words per minute; speech length 1460 words; speech time 743 secs

Rizk Joelle: speech speed 157 words per minute; speech length 3062 words; speech time 1168 secs

Speaker: speech speed 169 words per minute; speech length 1338 words; speech time 476 secs

Tetiana Avdieieva: speech speed 150 words per minute; speech length 2699 words; speech time 1080 secs