Beyond North: Effects of weakening encryption policies | IGF 2023 WS #516

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis of the speakers’ arguments regarding encryption reveals a variety of key points and perspectives. Overall, there is a prevailing negative sentiment towards anti-encryption policies due to the potential risks they pose.

One notable concern is the risk of fragmentation in encrypted services. The mention of various policies that could threaten end-to-end encryption in Europe, the UK, and the USA raises alarms. Additionally, the gathered public views indicate a potential fragmentation in the encrypted services offered. This fragmentation could disrupt the seamless communication and interoperability that users currently enjoy.

The extraterritorial effects of anti-encryption policies are another significant concern. The internet ecosystem and human rights can be affected if encrypted applications become region-specific, leading to a fragmented online environment. There is also anxiety surrounding the possibility of surveillance or the implementation of backdoors to encryption. These concerns highlight the potential infringement on privacy and human rights.

Furthermore, there is strong opposition to anti-encryption policies and the possibilities of surveillance and backdoors, with some participants expressing the belief that the issue of child pornography is being weaponised to strengthen these policies.

Contrary to the compliance-or-denial dichotomy that is often presented, there are alternative solutions to consider. Technological literacy can empower individuals to access platforms in different ways within a given jurisdiction, undermining the notion that compliance and denial are the only options. This dichotomy is seen as false and limiting, and there is optimism that technological literacy can pave the way for innovative approaches.

Blockchain technology is suggested as a positive solution that could provide a global, interoperable network without reliance on any single provider. The standardisation of the MLS (Messaging Layer Security) protocol by the IETF is cited as an example of successful interoperability work. By preventing a single entity from holding a monopoly over a billion users, a decentralised approach could empower users while ensuring security and functionality.

The potential consequences of removing encryption services, such as WhatsApp, are also highlighted. Many migrants and citizens with families in other parts of the world rely on services like WhatsApp to communicate. The removal of these services could potentially oppress diasporic populations and the global South, as they were among the earliest users and audiences. This raises concerns about reduced connectivity and the potential disruption of social bonds.

Additionally, the dominance of Facebook and Google in the technology sector is seen as an issue that limits competition. The disproportionate influence and control exerted by these companies are believed to hinder opportunities for other players in the industry. The audience perceives this dominance as detrimental to fair competition and the exploration of alternative technological solutions.

In conclusion, the analysis reveals a negative sentiment towards anti-encryption policies. Concerns about the fragmentation of encrypted services, the potential infringement on privacy and human rights, and the dominance of certain tech giants echo throughout the discussions. However, there is a strong belief in the possibility of alternative solutions, such as technological literacy and blockchain technology, to address these issues. The potential impact on diasporic populations and the global South is also a significant factor that adds to the urgency of preserving encryption services.

Pablo Bello

This analysis explores the various arguments and stances surrounding encryption and its implications for online safety. It specifically focuses on WhatsApp, a widely used encrypted messaging platform, which strongly opposes anti-encryption policies and regulations. WhatsApp argues that weakening encryption would pose a significant threat to the security and privacy of its users on a global scale.

One of the main concerns raised by WhatsApp is the risk of internet fragmentation that could arise from policies undermining encryption. The platform has even stated that it would consider leaving the UK if the proposed online safety bill is implemented in a way that compromises encryption. This highlights the potential implications of lower security standards for the interconnected global network.

On the other hand, some experts argue against the commonly perceived trade-off between safety and privacy. They assert that the idea that reducing privacy will automatically increase security is false and disproven. They suggest that it is essential to maintain a balance between both aspects rather than compromising one in favour of the other.

Discussions also focus on who should decide encryption standards and whether multiple protocols should be encouraged. Questions have been raised regarding the potential implications of having a single standard and the decision-making process involved. Careful consideration must be given to ensure that the best standards are implemented.

Importantly, WhatsApp collaborates with other companies and civil society groups to resist encryption regulations. This coalition is working together to avoid regulations that would impact encryption. WhatsApp actively advocates for maintaining the highest standard of protection with end-to-end encryption worldwide, emphasising its duty and responsibility to protect its users.

Furthermore, the analysis underscores the critical nature of encryption for society’s safety, with encryption being vital to protect millions of individuals, particularly those in the Global South. The argument against weakening encryption is supported by the belief that it does not make society any safer.

In conclusion, this analysis presents a range of arguments and stances on encryption and its impact on online safety, with WhatsApp taking a strong stance against anti-encryption policies and regulations. The platform highlights the potential risks of internet fragmentation and advocates for the protection of high encryption standards. It actively opposes encryption regulations and emphasises its duty to protect users worldwide. Encryption plays a crucial role in ensuring online security and privacy, and finding the right balance between safety and privacy is essential.

Juliana Fonteles da Silveira

The analysis explores the impact of anti-encryption policies on human rights in the Americas. It reveals that these policies may infringe upon the rights protected by the American Convention on Human Rights by granting access to personal information and allowing its processing. This argument is presented with a negative sentiment, highlighting concerns about the potential repercussions of anti-encryption policies.

The analysis also highlights that anti-encryption policies extend beyond privacy concerns to broader implications for freedom of expression and human rights. It asserts that encrypted communication is essential for activists, protesters, and journalists to communicate securely. It further points out that in many countries in the Americas, the absence of the rule of law, judicial independence, and democratic stability exacerbates the impact of these policies. This negative sentiment reflects a critical stance towards the rise of anti-encryption policies.

Another viewpoint discussed in the analysis raises concerns that anti-encryption policies in the Americas and Europe may contribute to the introduction of new repressive internet regulations in the region. The argument is made that any restriction on expression must meet the three-part test, which requires it to be provided by law, based on legitimate reasons, and in line with the principles of necessity and proportionality. This underscores the belief that anti-encryption policies should be subject to scrutiny to ensure compliance with human rights principles.

The analysis also mentions the role of the Inter-American Commission on Human Rights, which issues reports and recommendations on the impact of anti-encryption policies on human rights to states and private companies. It also facilitates public hearings for civil society and other actors to address violations related to encryption legislation. These observations are presented in a neutral sentiment, highlighting the involvement of international bodies in addressing potential human rights violations resulting from anti-encryption policies.

Additionally, the analysis notes that there is no substantial evidence to support the effectiveness of anti-encryption policies in ensuring safety. This negative sentiment raises doubts about the necessity and proportionality of such policies.

On the other hand, there is a positive sentiment expressed in support of encrypted and protected private communications. This viewpoint aligns with the importance of upholding human rights, privacy, and encryption. Although no supporting facts are cited for this position, it reinforces the notion that safeguarding private communication is crucial.

Overall, the analysis emphasises the significance of protecting human rights while considering encryption policies. It underscores the potential consequences of anti-encryption measures on various facets of human rights, including privacy, freedom of expression, non-discrimination, and human dignity. The analysis calls for a careful examination of these policies to determine their compatibility with international human rights standards.

Prateek Waghre

During the discussion, several issues regarding internet regulations and digital sovereignty were explored. The focus was specifically on the impact of Western institutions on the global South. It was argued that actions taken by Western institutions can have significant consequences for parts of the global South, and this sentiment was expressed in a negative manner.

The discussion highlighted concerns regarding the regulations imposed on digital services in India. These regulations include the retention of customer data for up to five years, the tracing of original messages on end-to-end encrypted services in accordance with Indian internet laws, and a broadened definition of telecommunication, which may impose licensing requirements on all internet services. These measures were viewed negatively by the participants and were seen as potentially harmful.

The exportation of regulatory designs to other countries was also a key topic of discussion. It was noted that countries such as Malaysia, Vietnam, Kenya, and Venezuela have adopted similar regulations to those enacted in Germany, including intermediary liability regulations. This observation was made without expressing a particular sentiment, indicating a neutral standpoint.

The control exerted by the Indian government over digital spaces raised concerns among the participants. The Data Protection Act, which grants the state the power to process large amounts of personal data while largely exempting the state itself from its privacy safeguards, was mentioned. Additionally, potential obligations to intercept messages and the involvement of a state-appointed grievance committee in content moderation decisions were seen as alarming. These points were discussed in a negative light and raised concerns about deepening digital inequalities.

The importance of protecting encryption and end-to-end encryption was positively emphasised. The need for solidarity in safeguarding encryption in the coming years was underscored, highlighting its significance in preserving privacy and security.

The discussion also touched on the traceability issue in India, particularly in relation to a high court traceability order that WhatsApp managed to obtain a stay for. The impact and implications of this order were presented in a neutral manner, suggesting the need to cautiously monitor developments in this area.

Overall, the participants advocated for the protection of encryption and stressed the importance of vigilance in monitoring traceability developments. They highlighted concerns over the regulations imposed on digital services in India and the potential exportation of similar regulatory designs to other countries. The need for solidarity in protecting encryption and the significance of observing the traceability issue were recurring themes throughout the discussion.

Masayuki Hatta

The discussion focuses on the impact of encryption on the economies of the Global North and Global South. It acknowledges that many individuals are using encryption without being aware of it, highlighting the need for greater awareness and education about these services. The lack of understanding could lead to problems if encryption is prohibited or removed.

Regarding Japan’s role in encryption regulation, it is noted that the country generally follows the policies of the Global North. This raises questions about whether Japan should be categorised as part of the Global North or Global South. The discussion is further complicated by Japan’s technical and socio-political characteristics, causing confusion about its position. Additionally, authoritarian tendencies in Japan may have implications for privacy and internet freedom.

The Global South is increasingly implementing regulations on encryption. However, it is important to recognise that technology is not constrained by geographical boundaries or regulations. Services like WhatsApp, Apple, and Signal remain accessible to users regardless of regulatory measures. This demonstrates that technology is universal and not limited by individual countries’ decisions.

The main point of the discussion appears to be uncertain and confused. There is a lack of clarity regarding the focus and objectives of the conversation. Additionally, the issue at hand is seen as multifaceted and political, contributing to the uncertainty surrounding the discussion.

In conclusion, the analysis underscores the importance of raising awareness and understanding among users about encryption. It highlights the potential problems that could arise if encryption is prohibited or removed. The discussion also raises intriguing questions about Japan’s role and position in encryption regulation. Moreover, the differing approaches taken by the Global North and Global South in regulating encryption reflect the evolving landscape of technological governance. Overall, the discussion necessitates further exploration and clarification of the main points and objectives.

Mariana Canto Sobral

The analysis examines various aspects related to encryption, privacy, and global trends. One key argument posits that global south countries often feel compelled to conform to trends set by the global north out of necessity or a perception of trendsetting. However, the analysis cautions that transnational regulations like the General Data Protection Regulation (GDPR) can impose compliance requirements on global south countries, risking exclusion from the market. This highlights the challenge these nations face in balancing global norms with their own interests.

Regarding privacy, the analysis emphasises that its definition and understanding are primarily shaped by Western, white, middle-class perspectives. As a result, privacy is seen as a privilege, disregarding the experiences and needs of marginalised groups. The historical example of people of colour being obligated to carry lanterns for surveillance further illustrates how such perspectives perpetuate inequality and social injustices.

Encryption evokes mixed sentiments. While it is viewed as a threat to vulnerable groups, it is also recognised as a powerful tool that can benefit the underprivileged and address power asymmetries. Challenging the prevailing notion that encryption hinders protection calls for a reevaluation of the narrative surrounding its role.

The analysis also disputes the notion that the absence of privacy automatically leads to increased security. It suggests that alternative approaches should be explored to achieve a balance between privacy and security.

Moreover, the analysis asserts encryption as a matter of human rights, emphasising the importance of protecting it as a fundamental right that contributes to peace, justice, and strong institutions. It calls for the Global South to embrace and safeguard encryption instead of perceiving it as a threat.

Additionally, the analysis recommends implementing regulations and policies to strengthen encryption in Latin America. It cites the example of Brazil, where revelations by Edward Snowden led to the establishment of a comprehensive civil rights internet framework. This demonstrates the potential positive impact of proactive measures in addressing encryption.

In conclusion, the analysis underscores the complex dynamics surrounding encryption, privacy, and global trends. It highlights the need to challenge prevailing narratives, redefine privacy, and recognise encryption as both a potential threat and a valuable asset. The analysis stresses the importance of safeguarding encryption as a human right and implementing appropriate regulations to promote security and reduce inequalities.

Moderator

Multiple legislative proposals introduced in the Global North, specifically in the USA, the UK, and the EU, are causing concern about their potential negative impact on end-to-end encryption. These proposals, including the Online Safety Act in the UK, the Kids Online Safety Act in the USA, and the Chat Control proposal in the EU, pose a threat to the privacy and security provided by encrypted services.

There are fears that these proposals could lead to a fragmentation in the availability of encrypted services worldwide. If certain encryption services are only accessible in specific regions, it could create a situation where some users have access to secure communication while others do not.

Adding to the complexity is the lack of awareness among users about their reliance on encryption in everyday tech usage. Many people are unaware that they are using encryption when using applications like WhatsApp or LINE. This lack of awareness makes it difficult for users to understand the value and implications of restricting encryption.
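What those applications quietly provide can be illustrated with a deliberately simplified sketch. The snippet below is a toy one-time-pad cipher in Python (it is emphatically not the Signal protocol that WhatsApp actually uses, nor production cryptography); it only illustrates the core property at stake in this debate: a server relaying the ciphertext cannot read the message, while an endpoint holding the key can.

```python
import secrets


def xor_cipher(key: bytes, data: bytes) -> bytes:
    # One-time-pad XOR: a pedagogical stand-in for a real cipher. The same
    # operation both encrypts and decrypts, since (m ^ k) ^ k == m.
    return bytes(k ^ d for k, d in zip(key, data))


message = b"hello from the global south"
# The key is shared only by the two endpoints, never by the relay.
key = secrets.token_bytes(len(message))

ciphertext = xor_cipher(key, message)       # what the relaying server sees
assert ciphertext != message                # the relay cannot read it
assert xor_cipher(key, ciphertext) == message  # the recipient can
```

Real end-to-end messengers replace the pre-shared random key with key-agreement protocols and authenticated ciphers, but the policy-relevant property is the same one shown here: intermediaries carry bytes they cannot interpret.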

The potential consequences of these legislative measures extend beyond privacy concerns. There are significant worries that these policies could curtail human rights and freedom of expression. Weakening encryption poses a risk of reducing global standards of security and privacy, especially impacting vulnerable populations who already live under non-democratic regimes.

It is crucial to defend against regulations that weaken encryption as encrypted communication plays a vital role in protecting freedom of speech, privacy, and security globally. The absence of data protection laws in many countries in the Americas contributes to state abuses and enhances state capacity for arbitrary measures on private communications.

Furthermore, there is no evidence to suggest that mass surveillance enabled by anti-encryption policies has been effective in ensuring safety in proportionate ways. The potential for abuse in surveillance policies and the infringement of privacy rights are major concerns.

In conclusion, the legislative proposals threatening end-to-end encryption in the Global North have sparked concern for the global internet ecosystem and human rights. Defending encryption is crucial as weakening it not only compromises privacy but also threatens human rights and freedom of expression. The lack of awareness about the role of encryption among users further complicates the understanding of its value and implications. Protecting encryption is essential for maintaining higher standards of security and privacy globally, especially for vulnerable populations.

Session transcript

Moderator:
I’m going to start with a brief introduction. We have a panel about encryption policies happening in the global north and the impact they have on the global south. We have a room with approximately, I would say, about five, six people, excluding the speakers. Thank you for all being here. My name is Olaf Kolkman, I’m with the Internet Society and I will be your moderator today. As I said, this is about encryption policies in the global north impacting the ability to communicate throughout the world. Since 2022, there have been a number of legislative proposals introduced that threaten end-to-end encryption. End-to-end encryption is the ability to communicate with confidentiality and integrity from one user to the other. This is a very important issue. In the United States, there are bills such as the Kids Online Safety Act circulating as proposals and being looked at, I would say. In the European Union, there’s the Chat Control proposal. In the UK, the Online Safety Bill has become an act, which pushes providers to come up with a solution to find harmful content, while a community of security researchers and practitioners has reached consensus that such a solution doesn’t quite exist. During the development of that bill, that’s the online harms bill, various providers of encrypted services already announced that if the bill came into effect and the actions it enables were taken, they would take their business elsewhere. And these are all laws that focus on specific regions of the world. As I said, Europe, the UK, the US, those are very specific areas of the globe, but their effects are felt all over the place. Because of course, we know the Internet is global. 
This panel, we put together, that is a number of us, to assess how these measures, when introduced, impact other regions of the world, and in particular the global south. Now, we have an excellent panel, an expert panel, to discuss this, consisting of five people, two of them here, three of them online, again showing the global nature of this discussion. We have with us Juliana Fonteles, who is sitting next to me. Juliana is a consultant for the Special Rapporteur on Freedom of Expression, a researcher at InternetLab, and a project assistant at the Brazilian Association of Investigative Journalism. Welcome. We also have Masayuki Hatta. Masayuki is currently an Associate Professor of Economics and Management at Surugadai University in Japan. Sorry if I butchered that name. You were originally trained as an economist and organisational theorist, and you write and speak extensively on intellectual property issues. But you also have, I think, a hobby which is very much related to encryption, as a contributor to the Tor project and other privacy-enhancing technologies. Online, we have a number of speakers, contributors. We have Mariana Canto, who is a visiting researcher and Chancellor Fellow at the Berlin Social Science Center in Germany, Director of the Institute for Research on Law and Technology of Recife (IP.rec) in Brazil, and a PhD candidate in law at the University of Stirling in the UK, where she is part of the Interdisciplinary Cluster on Democracy, Human Rights, and Communication Advocacy in the Digital Age. Further, we have Pablo Bello. He is also online. Pablo is the Director of Public Policies at WhatsApp for Latin America. He graduated in Economics from the University of Chile. 
He also holds an MBA in business from ESADE in Barcelona, Spain. He worked at the Inter-American Association of Telecommunication Companies, where he held the position of Executive Director. He was also Chile’s Assistant Secretary of Telecommunication between 2002 and 2006. Welcome. Prateek Waghre, I hope I’m not butchering your name, Prateek is a Policy Director at IFF, a technologist turned public policy professional. Prateek has spent nearly a decade in the CDN industry as a consultant and product manager. Since moving to public policy, his research work has focused on a number of areas such as internet shutdowns, information disorder in the information ecosystem, and governance of digital communication networks and social media in India. Prateek is also an alumnus of the U.S. State Department’s International Visitor Leadership Program on disinformation in the Quad. So those are the speakers today, and you are the audience, and I expect a little bit of engagement. If all is well, Marcos Pereira, who is online, is about to share a QR code to a Mentimeter board. And if all is good, that will appear there. So we have a bunch of questions, just to heat you up. Participate with us, grab your phone, and if you cannot scan the QR code, type in menti.com and enter the code 6831 2810. I repeat, 6831 2810. And we are offering a bunch of questions that we hope you can answer, and we hope we get some insights from those questions. What is the risk of fragmentation in encrypted services offered? Haves and have-nots. Talk-to or talk-not-to. We see responses coming in and I’m going to wait a while so that people online can also participate. This was a thing at the IGF: how do we make things interactive? So this is our experiment here. I haven’t seen it in other rooms yet. Well, thank you. I think we leave it at that. High would be 10, of course, and low would be zero. 
And I think that what we see here as a result is that people think that there is indeed a risk of fragmentation in encrypted services offered. Let me give an example of what fragmentation is. That’s what I mean with haves and have-nots: people who are able to use encryption services and people who are not able to use those services, or services that only exist in a particular region. So there is an encrypted application that is only available in Latin America and you cannot talk to me in Europe. That would be fragmentation for me. So people are still entering their numbers. I find it interesting that there are some low votes. Perhaps we come back to that in the Q&A and I will ask you to raise your hand if you were one of the low voters, like you don’t think there is fragmentation at all, or barely, and explain why you think that. I’m actually interested in that answer and we’ll pick that up in the question and answer section if I don’t forget to return to it. I believe we also have a second question on the sheet; we have three, I believe. How can the internet ecosystem and human rights be affected by the extraterritorial effects of anti-encryption policies from other countries? So for instance, the Online Safety Act: how can that impact other jurisdictions? And I believe that this question has a number of words that you can fill in. It’s always interesting to see what comes out. I’ll wait a few seconds. Okay, let’s see what the result is. Now we see the word cloud building, that is what is happening. I don’t know what a ‘surveilled serfdom future’ is. Ah, I think I know what it is, yes. A life in servitude, perhaps. The program doesn’t allow you to write about the normal circumstance. Ah, okay. And what does it mean? I’m going to ask you then what you mean by that sentence.

Audience:
I mean, if we are allowing surveillance or backdoors to encryption at this moment in time, it’s a slippery slope. And I think everyone knows that. And the whole discussion on child pornography is just used. I mean, it’s one good reason why you would want that, but it’s used to weaponise anti-encryption policies around the world.

Moderator:
Thanks. I think we have one more. Just have a look at this. I think this is not a very positive image, if I may summarise it as that. Do we have yet another question? No, the third one is the one with which we finish the workshop, so it’s the same question as the previous one, to see if we have some change in the opinions. So let’s ask our panel a couple of questions. So Masayuki, starting with you: the Internet has become essential for public and private services. Everybody is essentially using the Internet. And in countries such as Brazil and India, encrypted services such as WhatsApp and Signal, Telegram, all those types of services, and not only in Brazil and India, are being used by big and small companies to run their businesses. And, of course, we’re using Brazil and India because they are very, very big countries. How do encryption policies from the Global North impact Global South economies?

Masayuki Hatta:
It’s a very, very difficult question to answer, because I live in Japan and I’m not sure about the Global South. I have never been to India or Brazil, and I think in Japan, many people don’t know about encryption, or more specifically, many people don’t know they are actually using encryption, which means, for example, in Japan, many people, almost everybody, are using a smartphone application called LINE. LINE is something like WhatsApp. So I’m pretty sure in India or Brazil, the Global South, WhatsApp is widely used, and in Japan, LINE is very widely used, and LINE has a protocol called Letter Sealing. It’s an end-to-end encryption protocol. So I think one big issue, before we think about foreign influence on the Global South, is actually, I’m not sure how many Global South people know they are using encryption already. I’m not sure, is this the answer to your question?

Moderator:
But suppose that people are using encryption without knowing it, but still, if they are no longer able to use encryption and the confidentiality that is now offered is taken away from them, what do you think the impact would be?

Masayuki Hatta:
So I mean, they don’t know they are using encryption already, so they might not be aware when encryption is prohibited or gone. I think that’s one of the problems I guess we face, because we, or they, don’t know the true value of encryption. And I think that’s the situation in my country and the Global South.

Moderator:
Thank you. I think we might return to that. Mariana Canto, let’s see, where are you? You are online. The current geopolitics, shaped by the history of colonisation and new forms of dominance, reflect the perspective of the global north, at least that’s something we hear often. Not to dismiss that, despite the way I said it a minute ago. In your evaluation, how do the power dynamics of the global north impact the construction and development of cybersecurity policies in the global south? Does the tech dominance, the colonial nature, the colonial history, play a role in all of this? Over to Mariana.

Mariana Canto Sobral:
Thank you so much for the invitation first, it’s a pleasure to be here, it’s always an honour to be at the IGF, and I thank ISOC and I also thank IP.rec for the invite. In relation to the question, I think global south countries tend to follow general trends that happen in the global north. Sometimes due to necessity, for example, transnational regulations such as the GDPR, because if you don’t adhere to this kind of regulation, you’re excluded from the global market. Other times due to what is perceived as a global trend, for example, the import of narratives produced in the global north, as in the case of encryption narratives from law enforcement actors. However, those imports of narratives, especially in this latter case, are very dangerous when you have open discussions happening in the judiciary, for example, as is the case in Brazil. For those who don’t know, I’m from Brazil, and we’ve been following the recent discussions in the judiciary since the shutdowns of WhatsApp in 2015 and 2016. So when you import those narratives while those kinds of discussions are happening at the same time, those dilemmas can be tricky and can harm encryption many times. So I think it’s also important that measures such as the weakening of encryption can have extraterritorial effects, in relation to the application of the law in other countries, such as the UK’s new act, but also in relation to the import of those narratives. So the extraterritoriality is not only in relation to the application of the law, but also in how the narratives travel around the globe. So, for example, even if a country does not choose to adhere to a certain regulation, precedents can create a risk for encryption to exist in that country. 
I talked about the Online Safety Bill, now an act, but we've also been following the kinds of regulations, such as European ones, that are being developed and that have very much influenced our bills on fake news, for example, and our AI strategy too, which follows the AI Act in Europe. It's impossible to talk about encryption in Latin America, I would say, without talking about power asymmetries and the regulation of encryption, regulation itself. It's impossible to talk about law without connecting it to the real world, because regulation does not operate in isolation. We know that. It needs the real world to function; otherwise it's just useless text. And in relation to privacy, I sometimes question the privacy concept that we use. According to some experts like Payal Arora, we're going to have legislation based on privacy concepts that, for example, still consider privacy as related to the attitudes of Western-based, white, middle-class groups. So in this case, privacy is a privilege of the few. And we can see this not only now, but over the years and centuries, such as when Simone Browne talked about the lantern laws in the US, under which people of color were obligated to carry a lantern with them in order to be surveilled. So privacy is a privilege, and I believe that still nowadays it's a privilege of very few people in the world. And the weakening of encryption tends to accentuate this kind of power asymmetry even more. Today, we have a very, very serious agenda being discussed, which is child sexual abuse material online. And it's a very, very difficult matter to address. Even more so when you see that survivors and victims are not being heard in most discussions, because when we talk about encryption, when we talk about regulation, we still consider children as unable or lacking in agency. And for me, it's a very relevant matter to include those voices in the debate.
And not only the victims and the survivors, but also the people who work with those subjects and those people. So, I think it's very important to understand the power asymmetry not only in relation to the Global South and North, but also in relation to the people who are connected to the issue, in this case, children versus law enforcement authorities. But as the discussion is also connected to the Global North and South, we can see that Global South countries are still highly affected by policies that are not made by them, and they are still perceived as unable to enforce rights, in quotes, I would say, and many times as open-air laboratories for highly intrusive technology. We've been seeing lately, after the Pegasus case and the investigation of the PEGA Committee, that highly intrusive technology is being used in, and exported to, countries in the Global South with the permission of Global North countries that defend human rights. So, is this kind of regulation enough to protect the Global South? That's my question; sometimes I wonder if our notion of privacy is enough to protect all of the vulnerable groups in the world. And unfortunately, I don't have many answers, I have more questions than answers, but I would love to debate this questioning of the status quo, let's say, and how we can bring our region to the center of the debate and be heard. It's such an important debate for all of us. I think that's it, thank you so much.

Moderator:
Thank you, Mariana. If I may, so I'm looking up because the screen is there. If I may ask a clarifying question. You were just talking about questions around the status quo, and I was trying to understand how I would summarize that status quo, and, I'll blame it on lack of sleep, I couldn't. Could you summarize the status quo as you have it in your mind, just in a line or two?

Mariana Canto Sobral:
I think in the current status quo in relation to encryption, we still see encryption as a threat to vulnerable groups, and I don't think this is how we should perceive encryption. I think encryption can be a huge ally to those who are underprivileged and those who are in a situation of power asymmetry. So I think we have to question this narrative, or maybe the status quo, in which encryption is still perceived by some actors as a barrier to protection. The lack of privacy doesn't mean that we're more secure. That's what I meant.

Moderator:
Thank you. I think that was at least very clarifying for me, and I presume also for others. Prateek, over to you. Digital services from the Global North, the services that are developed and deployed in the Global North, have had a large influence in shaping the internet, if only historically. Of course, that creates dependencies on that technology for everybody, including people from the Global South. India, however, has tried to face some of those challenges, and I think it would be useful for you to explain what those challenges were and what insights we can draw from India's digital sovereignty policies that are relevant to the ongoing encryption disputes. Prateek, over to you.

Prateek Waghre:
Thank you very much for having me. I was told I have about 10 minutes, so I've just started my timer to make sure I don't go over. I've got three things that I broadly want to cover through the question, and I want to talk about encryption, but, for the large part, without directly talking about encryption. So I'll start with an example from India that highlights how actions taken by Global North or Western institutions, not just governments, not just services, can have an impact in other parts of the world. I want to make a general comment on the idea of regulatory contagion, where an instrument in one part of the world or one country can be imported into other countries, even if they have different underlying objectives. And then finally, I'll come back to specifics about India and some of the current or recent regulatory interventions that are happening there, and why some of them are concerning, especially from the encryption perspective and, more broadly, from the individual autonomy perspective. So, as you aptly noted, digital services certainly have an influence, but I want to talk about a very specific instance. On August 5 this year, the New York Times reported on alleged links between a quote-unquote tech mogul in America and propaganda networks from China. A lot of the details are not directly relevant to the present conversation, except for one point, where they included a reference to a news portal in India that also happens to be very critical of the current union government and that, for a number of years now, has been at the receiving end of harassment and obstruction by the country's financial investigators, who themselves, over a period of time, have become increasingly partisan in terms of the people they pursue, right?
Now, as recently as last week, citing some of the allegations in that very story, law enforcement officers conducted quote-unquote raids and seized the electronic devices of around 50 current and former reporters, contributors and staff of the organization, called NewsClick, including arresting two people, the founder and the HR head, under a law meant for terrorism charges, right? And this law has a history of people being detained for months and years without a trial, right? Now, you could argue that the paper was not aware of the consequences of its reporting, but people who declined to be quoted in the story have come out publicly saying that they didn't want to deal with the story in its current form. And one of the organizations quoted has also come out and said that the ultimate report didn't include its categorical denial, right? So this just goes to show that actions by institutions in the Global North, sometimes out of ignorance, sometimes knowing full well, can have outsize effects on people in the Global South and in other countries, right? And I want to quickly cross the Atlantic Ocean, from a US-specific example, and go to Germany. Everyone at the IGF is likely aware of the Network Enforcement Act, or NetzDG, right? Now, I'm not going to go into the specifics of the provisions themselves, and none of this is a comment on its effectiveness in the German context, because I do not have expertise there, but multiple scholars and researchers over the years have alluded to some of its provisions being imported by other countries, right? Especially ones with more authoritarian leanings. And, apologies in advance, I'll just go through some of them very quickly, and apologies if I mispronounce some of their names. Heidi Tworek, in a 2021 paper, stated that bills at the time in Malaysia, Vietnam, Kenya and Venezuela invoked similar language about broad and elastic categories.
She noted that Russia and Singapore made references to it in some form, either directly or through statements. Isabel Cannon refers again to Russia, Singapore, and Turkey, and notes how some of them have incorporated provisions requiring local presence. And this has a context, as we've seen in certain instances reported with Apple and Google. In some cases, recent Washington Post reporting about India suggests that these local presence requirements have been used to threaten companies and their employees as well. And in a Foreign Policy essay, Jacob Mchangama and Joelle Fiss cited a 2019 Freedom House report which said that since NetzDG, the Network Enforcement Act, was enacted, about 13 other countries have enacted similar intermediary liability regulations using a similar framework. Again, I'm making these points not directly about encryption, but to make the point that these are the forces and factors at play: when you have regulation, or regulatory designs, in some of these countries, they tend to get exported. And we've already made references to the Online Safety Act, which is something that we are watching very closely, and with a lot of concern, to see how some of that language then gets imported into other countries as well. Now, I want to come specifically to India. I'm going to sidestep the definition of digital sovereignty, because I know that that's much contested, much debated. But as things stand, India is currently in the midst of rewriting a lot of its laws that govern digital spaces. And unfortunately, there are several bad things in there that we as civil society organizations are concerned about. And it matters globally too, because of the sheer number of people in India; it has a precedent-setting capability. As Kieron O'Hara and Wendy Hall describe in their book, it's a swing state for the future of a lot of regulatory practices.
Now, there are three or four specific items of regulation that I'll talk about, which are the CERT-In directions of 2022, the current draft telecommunications bill, the Digital Personal Data Protection Act of 2023, and the current efforts to rework the intermediary liability framework through updates to current rules and potentially a new impending bill. And a common thread, before I get into specifics, a common thread that we'll see, is that a lot of them amass a tremendous amount of control and discretion for the union executive with limited oversight over its actions, often leaving us very little to rely on other than verbal assurances, which are not really enforceable in a lot of cases. So the CERT-In directions, which were notified in April 2022, impose a six-month log retention requirement on pretty much all internet services that operate in India. Specifically, they also set a five-year retention period for various types of data; there's some nuance here, but various types of data about customers retained by VPNs and by cloud service providers. What this means for zero-knowledge services, I think, is a huge open question. Then the intermediary liability framework, which is the IT Rules 2021 and subsequent amendments to it; these introduce what GNI has called hostage-taking laws, right. They introduced traceability requirements, directly relevant in the context of end-to-end encryption, which is essentially the idea that you can trace messages on end-to-end encrypted platforms to the point of origin without compromising end-to-end encryption itself, right. You have provisions for a grievance committee, appointed by the executive, that has a direct say in content moderation decisions that digital services may take, and more recently, you know, a fact-check unit being envisioned that will be used to flag content about the government itself as fake or false, right.
Then the telecommunications bill, which defines telecommunications so broadly that it can impose licensing requirements on pretty much any service on the internet. And this is relevant because the licensing requirements can then potentially include obligations to intercept messages, again, a direct implication for end-to-end encryption, or identity verification requirements, right. Again, for a lot of services that people rely on, and, you know, this has huge implications for vulnerable populations that tend to use these services, right. Then there's the Data Protection Act, which was recently notified, which incidentally imposes duties and potential penalties on people if they withhold any information from the state, and, in a very interesting inversion, grants the state the ability to process large amounts of personal data under a clause for certain legitimate uses, but pretty much exempts the state from those privacy protections, right. So, you know, it's not a happy picture that I'm painting, but these are some of the trends that we're seeing currently in India. Some of them have very significant implications, right? For the ability of people to use end-to-end encrypted services, not only in India, but globally as well, because of the precedent-setting ability that, you know, it has got. I'll pause there. I realize I think I'm done with my 10 minutes.

Moderator:
Thank you. That was a very comprehensive overview of the issues, and the sense of precedent-setting in all of this also came out clearly. Pablo. Pablo is also online. Recently, WhatsApp declared that if the Online Safety Bill were to be approved, and in fact it has been by now, the company would exit the United Kingdom. I believe there was a nuance to that: if it would be approved and implemented in the way that it was approved, so to speak. In the company's assessment, is there a potential risk of internet fragmentation of encrypted service provision stemming from anti-encryption policies like the one put forward in the UK? Pablo, please.

Pablo Bello:
Yeah, thank you so much for the invitation. I'm very glad to be with you. Sadly, from Brazil, so it's 2 a.m. here, but it's all good. Yes, I think this is a very important question. Of course, I'm sitting on this panel representing WhatsApp, but I am also a Global South person living in Latin America, and I have worked for the Chilean government on these kinds of issues in the past. So my perspective is both the company's perspective and my own perspective from the Global South. And yes, the company strongly believes that the threats to encryption, the risks that we are facing in the UK, in the European Union, and in other parts of the world, create a huge risk of fragmentation, in terms of imposing, in some sense, lower standards of security and privacy for global communications. Of course, the internet is a global, interconnected network. If one part of the network has lower standards, that has implications for everyone. And I think it's super important to consider the perspective of the Global South in that debate, because the effects of decisions made in the Global North, in a few countries, could have implications everywhere. In particular, I want to stress this idea that weakening encryption in one place affects the entire world. One piece of data that I think is important to introduce into that discussion is not technical data, it's political data. The Economist presents a survey year by year regarding the quality of democracies in the world. Only 8% of the world's population lives in full democracies, only 8%. And it's not by chance that these countries are mostly in the Global North, and in particular in Europe and the UK, of course. And 55% of the world's population lives under authoritarian or hybrid regimes, 37% under authoritarian regimes where human rights are not respected.
So the problem we have here is that if countries in the Global North introduce pieces of regulation that weaken encryption because they believe their institutions are strong enough, the rule of law is strong enough, and that it would be fine from their perspective, I strongly believe that that approach is wrong; but even if they believe that, they should also consider the global implications of those decisions. And this is very important because most of the people in the world live under flawed democracies or no democracy at all, without the rule of law, without proper institutions, without balance of power. So when a country, in particular the UK or the European Union, makes a decision to weaken encryption, it's affecting the lives of people everywhere: the activists in Nicaragua, the activists in Venezuela, the activists in Saudi Arabia. So I think this is why it's so important to have this discussion here at the IGF, because the implications are not limited to the borders of any one country. So yes, the introduction of country-level regulations that weaken encryption has global effects. And it's super important for the technical community, civil society and the private sector to continue working together, trying to make the case that these decisions would be profoundly wrong and would create a huge, huge impact. Besides this idea that weakening encryption in one place affects the entire world, there are two other concepts that I want to mention. The first one, of course, is the falsehood of the trade-off between safety and privacy. This idea that you can get more security, more safety, by reducing privacy is completely wrong, and we know that. But it's important to repeat this idea, because it's at the core of this discussion. And second, weakening encryption hurts everyone.
This idea, which is still at the core of some of these attempts at regulation, that you can create a backdoor, or some way to reduce security features for certain people only, is not true. We know that it's not true. So based on these three concepts, I strongly believe that we should continue fighting against those ideas. And going to the idea of the status quo that you asked about before, I fully agree with Mariana. Nevertheless, the status quo on encryption is better than the terrible situation we could be in in the future if encryption is not protected. So part of the status quo is important to defend as well, in order to preserve certain attributes of the internet that we already have. So yeah, I think this is my first intervention. Thank you so much.

Moderator:
Thank you, very clear, at least for me. Juliana, over to you. Brazil, and 33 other countries with it, have ratified the American Convention on Human Rights, which ensures, among other rights, the right to privacy. In your assessment, how might the extraterritorial impact of anti-encryption policies influence the rights safeguarded by this convention?

Juliana Fonteles da Silveira:
Thank you. Well, it's a pleasure to be here. I hope people online can hear me well. I would like to say that in the Americas, when we talk about privacy rights and data protection and other interests in conflict with them, the discussion goes in another direction and assumes other interests and other frames, differently from regions like Europe. The right to privacy is not usually regarded as one of the most important rights and protections in our social context, and most countries in our region don't even have data protection legislation and are far from bringing this discussion to the table, which means that there are no procedural safeguards that could limit state and non-state power in accessing and processing personal information, which is the central issue of anti-encryption policies. And this favors the flourishing of those very popular narratives that claim access to data for law enforcement agencies, and other decryption measures, as a solution to protect other highly valid public interests such as the protection of children, security and public safety. At the same time, we are also talking about a region, the Americas, where in many countries an absence of the rule of law, judicial independence and democratic stability prevails, and because of that, state abuses of all sorts are not subjected to strict control. Likewise, countries in Latin America and the Caribbean, even the ones that have a long history of commitment to democracy, are marked by traditions of violent repression of protests, murder of journalists, persecution of human rights defenders, arbitrary arrests related to the expression of opinions, and criminalization of LGBT people and of abortion, for instance. And the information about all of this behavior is registered in private communications protected by encryption.
So in this regard, at the Inter-American Commission on Human Rights we have increasingly received reports on the persecution and online monitoring of activists and journalists who report cases of corruption or who represent conflicts with the interests of the political regime in their countries, and on the penetration of surveillance software and other methods of surveillance to persecute them. Also, in Central America, a growing number of laws that criminalize and suffocate the work of NGOs are being put in place, and these organizations rely to a large extent on the protection of their private communications to do their work of defending human rights and supporting victims. So, keeping all this in mind, we should consider how this scenario can worsen, and these rights be undermined, by weakening end-to-end encryption techniques and giving government agencies access to private communications. Because we are talking about an unimaginable amount of data that offers comprehensive information about all thinkable aspects of individuals' lives. In contexts of abuse, it doesn't matter whether you have something to hide or not; being a government's target is enough to be harassed by digital surveillance, in cases where what one says, or what an organization does, threatens the credibility or the legitimacy of the regime, or because one's behavior is incompatible with the government's moral agenda. And making the content of private communications available amplifies the state's capacity to conduct those arbitrary measures to an extent that we are still unaware of, which ultimately chills expression, intimidates activists and the activities of human rights defenders, adds the pressure of reprisals on LGBTQ people or people who are pursuing reproductive rights, and in some cases also facilitates detentions and killings.
Journalists, for example, rely deeply on encrypted communications to communicate with their sources, to do their work of investigation and reporting, and to shed light on issues of general concern that support the functioning of a democratic and accountable political regime. Encrypted communication has also been necessary for activists and protesters, and has been threatened by states that continuously try to intercept communications in times of protest or civil unrest; people who may be at risk benefit from encrypted communications to hold opinions safely and without unlawful interference and attacks. That said, I would like to answer the question by stating that the effects of anti-encryption policies in the Americas go beyond the sole protection of privacy, as if it were detached from other rights. The encryption debate needs to be framed as a matter of human rights in a broader sense, and as a matter of freedom of expression, given its role as a gateway to securing the right to opinion and the collective dimension of freedom of expression, which allows society as a whole to have access to critical information and knowledge. And the problem is still much broader than this, because introducing, for instance, so-called backdoors, or any vulnerability, does not provide access only to specific actors, as these legislations usually claim. The introduction of these vulnerabilities gives all malicious actors access to private communications, and can be exploited by the same criminal and terrorist networks that the limitations aim to deter. And the consequences are severe, as we have seen in the Snowden reports on state surveillance of foreign states' secret and strategic communications. And this highlights the effects on sovereignty and national security produced by the weakening of encrypted communications.
And on top of that, particular attention must be paid to the fact that undermining encrypted communications means making even more personal data, data on all aspects of an individual's life, available to private actors from the technology sector, whose capacities for merging databases and profiling are already incredibly high, and who use this data to train and feed AI models and deploy micro-targeting and recommendation strategies, for instance, which impacts the digital public debate as well. In this sense, we should bear in mind that the digital technologies developed by these actors have become more reliant on user data based on demographics and behavior, and non-encrypted private communications offer a vast amount of this type of data and facilitate corporate surveillance. And when it comes to these digital technologies, this has effects not only on privacy, on concerns of consent, and on the democratic public debate, as I said; it is also usually related to reports of bias and discrimination in AI models, which ultimately has effects on human dignity, equality and non-discrimination rights. Besides that, I would like to highlight that the rise of anti-encryption policies in jurisdictions of the Americas and in Europe could inspire other new repressive internet regulations in the region, since we have already seen an increase in rhetoric using online safety and security as a means to crack down on internet freedom across Latin America and to put forward regulatory proposals that suppress human rights in the online environment.
And if all of these efforts to restrict encryption represent such threats to human rights, especially privacy and freedom of expression, then their implementation must meet the well-known three-part test, which states that any limitation on expression must be provided for by law, may only be imposed on legitimate grounds, and must conform to the strict tests of necessity and proportionality. Under international human rights law, states are obliged to protect privacy and freedom of expression against unlawful and arbitrary interference and attacks. And just to conclude, I would like to say that we should move the conversation on digital communication in the direction of advancing people's protection online, not people's control and increasing surveillance, and we should be strengthening policies and enhancing a human rights-centred approach to digital communications. Thank you.

Moderator:
Thank you. I do want to return to Professor Hatta. I believe my question caught you a little bit off guard earlier, and I think you prepared some opening statements, so I want to make sure that I give you the opportunity to share your prepared thoughts.

Masayuki Hatta:
Yeah, not really, because I'm kind of confused as to whether Japan, I'm supposed to talk about Japan, but whether Japan is Global North or Global South in this context. I mean, Japan is one of the developed countries, and we enjoy basic internet freedom and a working democracy, but Japan is not really a trendsetter on this encryption regulation, because, you know, we don't even have any big tech, so, how can I say. So I understand that the Global South is increasingly regulating encryption and maybe oppressing democracy, and still, you know, I don't know whether Japan is a trendsetter or not. So then, how can I say, even in Japan, many people actually do not support privacy or freedom. We have a bit of an authoritarian tendency, and so we have actually used the Western, or Global North, influence in making our policy. So every country could go either way, anti-encryption or pro-encryption, and many countries are not naturally pursuing freedom, I guess, I'm not sure, and we counter that tendency with Global North or Western philosophy or policy. But still, I think I heard that Global South people think the Global North influence is not always good for Global South policy. I might be misunderstanding, sorry, I'm still a bit caught off guard, but my main concern, or my main question, is: what am I supposed to talk about, as a member of the Global North or the Global South?

Moderator:
Well, thank you for your thoughts. One of the things that I thought about when discussing this: maybe there is also a West-East dimension to the way that people approach this issue. But I want to turn things around. I do have questions, but first, for the audience, either online, and I trust that somebody will take care of the online questions if there are any, or in the room: are there people who want to add comments, or say something, or ask something of our panellists? Well, in that case, I'm going to ask a question, and of course, panellists, feel free to discuss among yourselves. A question that I have around WhatsApp. So, the threat, and yes, I think it's a threat, that WhatsApp made was, at some point: if this law is going to be enforced in the UK, if Ofcom is developing technologies that will be able to scan content on the machines, and we're pretty sure that's not safe, then we will withdraw from the market. But if you do that, you no longer provide that market with any encrypted service, and that means that the people who live there don't have any means to communicate with confidentiality. Of course, that also fits the playbook of non-democratic nations that would like to see encrypted technologies leave rather than come. What's your thought on that? Is the threat of leaving actually the threat that you want to make?

Pablo Bello:
If I may, I’m going to start with this one.

Moderator:
Yes, please.

Pablo Bello:
I would not say that this is a threat. For WhatsApp, encryption is in our DNA. WhatsApp is an encrypted platform, and this is part of our definition; this is what we are. And it's critical for WhatsApp to preserve that. The issue with the regulation being discussed in the UK, and maybe in the European Union as well, is that we strongly believe this kind of regulation will break encryption. We strongly believe that client-side scanning is against the principles and the characteristics of encryption. If that regulation is enforced and we have to implement client-side scanning in order to continue operating, that will create a huge risk, not just for people in the UK, but for the rest of the world as well. It's not a threat. The point is that, in order to keep operating as an encrypted platform, it's not feasible for us to comply with that approach if the bill is implemented in the worst way that we have considered. It's important to clarify that. It's not that we are pressuring regulators to change the democratic decision of a certain country; it's that we are saying that we will defend encryption, in the same way that we went to the Supreme Court in India to challenge the traceability requirement that was decided in the IT Rules, and in the same way that we are arguing against traceability in Brazil as well. We are defending encryption. And the idea of breaking encryption in one country, for a global communication platform like WhatsApp, is just not feasible.

Moderator:
No, but I think even if it's not a threat but really the result of not being able to operate the service at your quality standard in a specific nation, the result is that you're out of that nation once the regulation takes effect. And that might actually be a way for countries to impose a regulation and send a signal, with WhatsApp leaving the country and leaving its population without any encryption. Perhaps any of the other panellists would like to respond? Ah, there's somebody from the audience. Thank you.

Audience:
I love this panel, and I just wanted to comment on your last question. I think there is no compromise possible for these platforms. Once you compromise on encryption, there is no going back, and then there is just no more encryption possible. And the question you asked, forgive me for this, is a false dichotomy, because it's not that either you comply and then you offer your services to all these people, or you don't and they don't have access to it. There is another alternative, and it relates a lot to the technological literacy of the people within that jurisdiction, and I think we can work more on this. If, in authoritarian places where some of these apps are forbidden, people still have access to them, then the same can apply in non-authoritarian jurisdictions too. So I think there are third, fourth, fifth alternative ways to do this. There's a very good example: MLS, which was recently standardised by the IETF, and I follow a bit of the discussion on this. Some countries want to hinder encryption, and the first question was about how much this can create fragmentation, but in theory, if we have a network that is interoperable, the network becomes global. Depending on the client you have, you can plug in, and a person in that state can still get access. This kind of infrastructure can help guarantee that the ecosystem is as safe as possible and that there is no monopolist that can, in fact, influence a billion people. As far as I know, this was endorsed by other standards bodies as well. If you want to keep a monopoly, you have to offer good security for all your users; otherwise, withholding it just becomes a way to keep the monopoly.

Moderator:
Thank you.

Pablo Bello:
Hi, I'm not a technical expert on encryption at all, and this is about the technicalities: how to implement interoperability without introducing additional risk. There is a huge discussion, with different perspectives, and a lot of experts have stated that interoperability as proposed, or as decided by the European Union in the DMA, could create some vulnerabilities in terms of how information is protected. And which standard? Who will decide the standard? Why one standard? How could that affect the development of different standards? There are different approaches to end-to-end encryption; the Signal protocol is one, and there are other approaches and other technologies. So it's not an easy question. It's important to have more solutions and more protocols in the pipeline. It's an ongoing discussion, of course; it's not a dogmatic approach. The critical aspect, from our perspective, is to ensure that the high standards we have introduced using the Signal protocol on WhatsApp are protected. And at the same time, and this is the other side of the discussion, that integrity measures remain available to prevent misuse of the technology, which is a risk we already know about. So it's an ongoing discussion, and I would prefer to have other people from the company explain the technicalities of our approach.

Moderator:
We have the chair of the IAB, who knows everything about the IETF standards process.

Audience:
Not everything. Oh, almost everything. Hello, my name is Mirja Kühlewind. I'm the chair of the Internet Architecture Board. I wanted to comment a little bit on your idea: what happens if we don't have an encrypted communication platform anymore? One thing is that there are a lot of circumvention techniques, so you can still access services in other countries, because the internet is a global platform. It's really hard to block access from just one country. I think that's a very important point. The other point is that encryption is not a layer. Encryption is a function that you can implement at every layer of the stack, and the way we designed the internet architecture is that you can stack things on each other. So you can always add another layer with encryption somewhere else, right? This might then become a cat-and-mouse game. So if you try to break encryption here, it is not a silver bullet, because encryption, and security, is a function, not something that you deploy in only one place. And then, to comment on the IETF: I cannot speak for Matter, of course, but you cannot generally say somebody has withdrawn participation, because we design standards with a number of companies involved in the process, hopefully enough companies that we can design a good protocol in the end with agreement among a large enough group. But we design our protocols for all internet participants, right? Everybody can adopt them. And in this specific case, the reason we are standardising something is not necessarily interoperability; it's not yet that different platforms can talk to each other, though hopefully we get there as well.
The reason people came to the IETF to standardise it was to get engagement from other companies and end up with a really good, secure, well-designed protocol. But it doesn't mean that people who don't deploy this protocol don't deploy encryption, because, as the previous speaker just said, there are many approaches to that. And it's not about interoperability if you talk about encryption within one platform. So I wouldn't read too much into who is actively driving work in the IETF; it's more important who is actually adopting and deploying these technologies.
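The point above, that encryption is a function rather than a layer and can be stacked, can be sketched in a few lines of Python. This is a toy illustration only, not anything from the session: the hash-based XOR keystream below is not a secure cipher (real deployments would use something like AES-GCM or the Signal protocol), and all names in it are invented. The idea it shows is that a message sealed at the application layer stays opaque even if a lower layer's protection, such as TLS, is stripped away.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: counter mode over SHA-256. Illustrative only,
    # NOT a secure cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR the data with the keystream; applying it twice decrypts.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# Application-layer encryption: the message is sealed before it ever
# reaches the transport. Even if a lower layer (e.g. TLS) is removed
# or intercepted, the carrier only ever sees ciphertext.
app_key, app_nonce = b"app-layer-secret", b"nonce-1"
message = b"meet at noon"
ciphertext = xor_encrypt(app_key, app_nonce, message)

assert ciphertext != message
# The application layer round-trips independently of whatever the
# transport layer does or does not protect.
assert xor_encrypt(app_key, app_nonce, ciphertext) == message
```

Because the cipher is symmetric, applying `xor_encrypt` twice with the same key and nonce recovers the plaintext, and each layer of the stack can add such a function independently, which is why removing encryption at one layer is not a silver bullet.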

Moderator:
Yeah, and that work really came out of a DMA requirement, a Digital Markets Act requirement, or a Digital Services Act requirement. I want to…

Audience:
No, it’s not DMA, it’s a digital work requirement.

Moderator:
I was actually asking a leading question, so I want to clarify that a little bit. Suppose we lose Signal from the market, we lose WhatsApp from the market, and I think you also touched upon that. Then one would hope that people look for alternatives. Of course, the issue with companies like Meta, like Signal, like Telegram, is that they have an office, a corporate presence. My prediction would be, if something like that happens, that users will move. The internet always routes around its problems; that's a famous quote, I don't know who said it. Or: the internet only just works, it always just delivers what you need. What I think will happen is that people will start to move to decentralised services. Just like the Tor network; Professor Hatta, you've been working on that, and the Tor network provides decentralised means of encryption. During the security conference in Las Vegas called DEF CON, the Cult of the Dead Cow released Veilid, a new protocol that is highly distributed and has privacy and confidentiality guarantees. Not to say that it has stood the test of time yet, but those are the kinds of things I think the world will move to. Sorry for my intervention here, but that was a leading question in some way.

Audience:
Hi, I just have a quick question, I think for the WhatsApp representative, and it's on this topic as well. My focus around encryption is that oftentimes one party of a two-party conversation is the one that requires encryption; often this is how diasporic populations communicate with each other. So when we talk about legislation that removes WhatsApp from the UK, a country with tons of migrants from all over the world and tons of citizens with families in other parts of the world, is this a form of oppressing the Global South, who were your earliest users and earliest audiences and who find the most use in this to this day? Do we see the parallels between the Global North shutting down communication lines that the Global South uses, when we have spent decades condemning those who shut down access to Google? Might there be a similarity, where the North, which cares about encryption, is shutting it down, and the South, which surely cares as well, isn't seeing how that is an oppression?

Moderator:
What is the question? We are coming up on time, and someone else wants to come into the discussion before I wrap up.

Audience:
But I will take a turn and build on it. Tom is from Facebook, and that's a really close … If they go all the way down to the operating system, Google, Android, sensors, everything, it goes into every application. It's two companies, and you are out of luck.

Moderator:
I think that's a correct observation. The question we had online was from

Audience:
Monika from Germany: can you make a legal case against a government whose encryption legislation violates international human rights instruments it has signed up to? There was a typo in that sentence, but I think that's a good question for you. Is that a case one can make?

Juliana Fonteles da Silveira:
Well, yes, we do a lot of that at the Inter-American Commission on Human Rights. In the Americas we have a lot of regulation proposals and other anti-encryption policy efforts, including in Brazil. What we do at the Inter-American Commission on Human Rights is, basically, produce reports and issue recommendations to states and to private companies on the issue. We have a lot of recommendations on those human rights violations and on how this affects human rights beyond privacy alone, as I said: freedom of expression, non-discrimination rights, and human dignity. We also have other mechanisms, such as public hearings, where we receive civil society and other actors to make a case and discuss those violations with states. So it would be something along those lines.

Moderator:
Thank you, Juliana. And there was a question right over here. Yes, but in the meantime, I would like to offer Pablo the opportunity to answer the other question in the room.

Pablo Bello:
Yeah, sure, super quickly. Look, WhatsApp of course doesn't want to stop operating in the UK at all; quite the opposite. WhatsApp wants to keep operating worldwide with the highest standard of protection, with end-to-end encryption. WhatsApp is mostly a Global South platform. By far most of our users are in the Global South: India, Indonesia, Latin America, Brazil in particular. That is where most of our users are, and we have the duty and the responsibility to protect those users all the time. So of course we want to continue operating in the United Kingdom. This is what we want, this is why we are pushing hard, and this is why we have this coalition with other companies and with civil society: to avoid a regulation that would affect encryption. We strongly believe this kind of regulation puts people's lives in jeopardy, not just in the UK but worldwide. It's not a threat. And of course we want to protect the diaspora. We are trying to protect people everywhere by defending encryption, and we will continue doing that.

Moderator:
Thank you very much. In the last five minutes of our panel, I want to give the panellists the opportunity for final remarks of one minute each, starting with Juliana.

Juliana Fonteles da Silveira:
Thank you. I would just like to say that, when we are talking about encryption, human rights, and digital communications, there is no in-between situation: we either have encrypted, protected private communications, or we have readable messages and communications. And we have no evidence showing that the mass surveillance enabled by anti-encryption policies has ever been effective in delivering the safety these policy efforts claim. So I guess it's that.

Moderator:
It will always be abused, is what you're saying. Professor Hatta.

Masayuki Hatta:
Okay. Sorry, I'm somewhat confused and couldn't reply properly this time. My basic attitude is that technology doesn't choose its country or its users. So even if a country regulates it, people can still use it, and the likes of WhatsApp or Apple or Signal can just walk away and exit. So I still don't understand the main point of this discussion. Sorry.

Moderator:
If those services were no longer available, would users turn to alternatives like Tor, which you've contributed to? Do you think that is a way out?

Masayuki Hatta:
I'm not sure what "way out" means, but users can still use these tools even if they are banned or prohibited. And the Global South issue around regulation is basically a political problem within Global South countries. So I'm sorry, I still don't get the main point of this discussion, but maybe I'm the only one. Thank you very much.

Moderator:
Thank you, thank you. Going online, Mariana, please, final thoughts.

Mariana Canto Sobral:
I just want to give thanks for the debate; I think it was a very rich one. I leave my voice here to echo Juliana and say that encryption is a human rights matter. I think it's essential that we preserve encryption, and that the Global South takes the position of protecting it, not threatening it. I hope that in the near future we can adopt this position of relevance, as we did after the Snowden revelations, when we built a very strong civil rights internet framework in Brazil, and that we can take the same position on encryption too, building regulation, or even policies, that will strengthen encryption in Latin America.

Moderator:
Thank you. Pablo.

Pablo Bello:
Yeah, well, thank you so much for the invitation once again. My final message: I think it's critical for civil society, the technical community, and the private sector, reunited at this IGF, to continue working together to convince some governments that weakening encryption won't make their own societies safer but will put millions of people at risk. And most of the people affected are in the Global South. Thanks so much.

Moderator:
Thank you. And last but not least, Prateek.

Prateek Waghre:
I think this was a very interesting discussion and conversation. Just two quick points. One, I will echo what has been called for across the room: the need for solidarity to protect encryption and end-to-end encryption, because I think we are headed for a slightly tumultuous period in that sense, and a lot of us need to work together to ensure that it remains defended, protected, and advanced in the years to come. Second, not so much a remark as a lead: I mentioned traceability. Some of you will be interested to know that over the last couple of weeks, a high court in India has handed down the first traceability order, which I believe WhatsApp has been able to get a stay on. I would just say: watch that space, and we'll have to see how it evolves.

Moderator:
Thank you for that. And with that, I would like to thank the panellists and the engaging audience in the room for their input and comments. I hope, and think, there is hope in this dossier for keeping encryption available for everybody, because we also heard that if we start eating away at it, not only on the legal side but also on the technical side, that will cause a race to the bottom, and we're not here to see that happen. Thank you very much. Thank you for watching.

Audience

Speech speed

199 words per minute

Speech length

1429 words

Speech time

430 secs

Juliana Fonteles da Silveira

Speech speed

150 words per minute

Speech length

1694 words

Speech time

676 secs

Mariana Canto Sobral

Speech speed

152 words per minute

Speech length

1237 words

Speech time

487 secs

Masayuki Hatta

Speech speed

94 words per minute

Speech length

779 words

Speech time

496 secs

Moderator

Speech speed

137 words per minute

Speech length

3145 words

Speech time

1380 secs

Pablo Bello

Speech speed

130 words per minute

Speech length

1759 words

Speech time

811 secs

Prateek Waghre

Speech speed

169 words per minute

Speech length

1842 words

Speech time

654 secs

Barriers to Inclusion: Strategies for People with disability | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Saba

The session aims to explore policies, strategies, and technologies that promote inclusive and accessible digital services for people with disabilities. It acknowledges the challenges faced by individuals with disabilities in bridging the digital divide and aims to address these challenges by examining ways to close the gap and provide equal opportunities for all. Key policy questions to be addressed include inclusive technology and digital services design, bridging the digital divide, and accessible training programs. The session emphasises the importance of inclusivity and accessibility in digital technologies and services, and highlights the ongoing efforts and commitment in this regard. The panel discussion will feature guest speakers Judith, Gunela, Theorose, Denise, and Mohamed Kamran, who are experts in the field, and their contributions will be appreciated. The session aims to provide a platform to discuss and explore innovative solutions for promoting inclusivity and accessibility in the digital realm.

Audience

During the discussion, the speakers focused on the challenges and opportunities of digital inclusion for people with disabilities. They emphasized the importance of adopting a granular approach to address the specific needs of each disability type. It was argued that people with disabilities have diverse requirements, and efforts in digital inclusion need to be both broad and deep to cater to these specific needs.

Collaboration between governments, businesses, and individuals was identified as a key driver for meaningful change in digital accessibility. The speakers stressed that fostering a collaborative environment can lead to impactful initiatives and solutions that benefit people with disabilities. This collaboration could involve sharing resources, knowledge, and expertise to achieve greater accessibility and inclusion.

Furthermore, it was emphasized that the inclusion of people with disabilities should go beyond technical accessibility and encompass the content and functionality of digital platforms. The speakers argued that it is not enough to simply make digital platforms technically accessible; it is equally important to ensure that the content and functionality of these platforms are designed in a way that caters to the needs of people with disabilities.

Another important point raised during the discussion was the need for better recognition of different types of disabilities and parameters in countries. The classification of disabilities varies from country to country, leading to inconsistencies in support and accessibility measures. Therefore, addressing this issue and working towards a more inclusive and comprehensive understanding of disabilities is crucial.

Additionally, the speakers highlighted the low representation of people with disabilities at the Internet Governance Forum and the low internet use among this population in some countries. These observations underscored the urgent need to focus on including people with disabilities in all fields, especially in internet access and participation. Increasing the representation of people with disabilities at forums and conferences, as well as improving the availability of accessible internet services, are crucial steps in ensuring their equal participation and inclusion.

In conclusion, the discussion shed light on the challenges and opportunities of digital inclusion for people with disabilities. It emphasized the importance of a granular approach, collaboration, and recognition when addressing the specific needs of different disability types. The inclusion of people with disabilities should extend beyond technical accessibility to include the content and functionality of digital platforms. Moreover, improving representation and increasing internet accessibility for this population are vital for their equal participation and inclusion.

Marjorie Mudi Gege

Stigmatization of people with disabilities is a pervasive issue that affects individuals across all sectors of society. However, it is particularly detrimental for people with disabilities as they often face additional challenges and barriers due to the stigma associated with their conditions. One of the main factors contributing to this issue is the fact that some disabilities are not immediately visible, leading to misunderstandings and further stigmatization.

Education and policy play crucial roles in combating stigmatization towards people with disabilities. Education has the power to reshape attitudes and perceptions by teaching individuals about the humanity and capabilities of people with disabilities. By educating society about the diverse range of disabilities and the unique challenges faced by individuals living with them, empathy and understanding can be fostered, reducing stigma and promoting inclusivity.

Furthermore, the implementation of effective policies is essential in addressing the issue of stigmatization. It is important to examine whether policies exist to protect individuals with disabilities and promote equal opportunities. However, determining the existence and accessibility of such policies can be a time-consuming process. Nevertheless, it is imperative to ensure that policies are in place and readily available to support individuals with disabilities, as they serve as a foundation for fostering a more inclusive and accepting society.

In addition to education and policy, advocacy is considered a never-ending but necessary process in battling stigmatization. Advocacy plays a significant role in raising awareness and promoting the rights and needs of people with disabilities. By amplifying their voices and experiences, advocates can challenge misconceptions and break down stereotypes, facilitating social change and progress towards a more equitable society.

In conclusion, the stigmatization of people with disabilities poses a significant challenge that needs to be urgently addressed. Education, policy, and advocacy are essential components in combating this issue. By promoting inclusive education, implementing effective policies, and advocating for the rights of individuals with disabilities, society can work towards creating a more accepting and inclusive environment. It is crucial that we strive to dismantle the misconceptions and prejudices surrounding disabilities, fostering a society where everyone is treated with dignity, respect, and equal opportunities.

Theorose Elikplim Dzineku

Advocacy work is crucial in making technology and digital services accessible for people with disabilities. Currently, there are several challenges that need to be addressed. Firstly, content creators often overlook the needs of individuals with disabilities, resulting in inaccessible content. Secondly, many users are unaware of the accessibility options on their devices, which limits their ability to access digital content effectively. Additionally, those who cannot afford devices with built-in accessibility features are left out, leading to a significant accessibility gap.

The issue is particularly pronounced in Africa, where limited awareness exists on how to make content accessible. This lack of understanding perpetuates the accessibility gap, further marginalising individuals with disabilities. To tackle this, capacity-building initiatives are needed to educate content creators and users on the importance of accessibility and provide them with the skills to make content accessible.

Involving people with disabilities in problem-solving and content creation is crucial. By including individuals with disabilities in the design and development of accessible content and applications, the products can better cater to their needs. Opportunities should be created to utilise the skills of people with disabilities, who often have computer science expertise.

Collaboration between organisations, governments, and civil services is essential for effective inclusion. This collaboration can lead to increased funding and support for initiatives like the inclusive tech programme in Ghana, led by Dr. Millicent, a disabled individual. The programme organises hackathons and technology training sessions for people with disabilities, empowering them with the skills to navigate digital technologies.

Stigma in online spaces is a significant challenge for individuals with disabilities, which needs to be addressed through policy interventions. People with disabilities often face discrimination and abuse online, amplified by misconceptions and the complexity of disabilities. Policies should be implemented to counteract this stigmatisation and create a safe and inclusive online environment.

Education and awareness are vital in combating stigma and prejudice. Many people have misconceptions about disabilities due to a lack of understanding. By promoting quality education and raising awareness, society can develop a more inclusive attitude towards people with disabilities, reducing inequalities.

Advocacy for disability rights and awareness must continue as an ongoing process. It is crucial to advocate for the rights of individuals with disabilities, promote accessibility and inclusion, and challenge societal barriers. This will create a more inclusive and accessible digital landscape that empowers individuals with disabilities and reduces inequalities.

Judith Hellerstein

The analysis highlights the issue of digital accessibility and inclusion for people with disabilities. Judith Hellerstein, who represents multiple interests in the sector, is an important figure. She runs her own firm, Hellerstein & Associates, and works directly with the US government on accessibility. Hellerstein has participated in events organized by the International Telecommunication Union (ITU) and is a co-coordinator of the Dynamic Coalition on Accessibility and Disability.

Hellerstein advocates for the development of stronger digital economies and increased accessibility. She helps countries develop their digital economies and emphasizes accessibility. Currently, only 3% of the internet is accessible for persons with disabilities worldwide. With over 1.3 billion people with disabilities globally, there is a clear need to make technology and digital services more accessible and inclusive.

The analysis supports efforts to update the Americans with Disabilities Act (ADA) and enforce Web Content Accessibility Guidelines (WCAG). These initiatives aim to legally mandate digital accessibility and ensure that companies follow guidelines to make their online content accessible to all users. Updating the ADA and enforcing WCAG 2.1 or 2.2 standards are seen as important steps in achieving greater accessibility for people with disabilities.

The analysis also points out the need for better awareness and practices in developing and designing accessible content. Companies often fail to inform developers about accessibility guidelines, creating barriers for people using screen-readers. Issues with metadata and image descriptions also contribute to the lack of accessibility. Therefore, improving awareness and incorporating accessibility practices in the development and design of digital content are necessary.

In terms of education, there is a strong argument for making programs accessible and inclusive for people with disabilities. Programs should be designed to be accessible to all, regardless of their disabilities. This applies to both online and in-person education. The analysis also highlights the importance of adequately describing pictures and diagrams used in educational materials to ensure that persons with disabilities can fully understand the content. An example is given of a person who developed a special Braille keyboard for STEM education, underscoring the need to adapt educational materials for different learning needs.

It is acknowledged that there is no “one-size-fits-all” approach to meeting the educational needs of people with disabilities. Different disabilities, such as visual impairments, hearing impairments, and cognitive disabilities, require tailored approaches to address their specific needs. Therefore, to achieve true accessibility and inclusion, it is crucial to understand and address the unique challenges faced by different groups of individuals with disabilities.

Lastly, the analysis stresses the importance of testing platforms used for education programs for accessibility. Many programs claim to be accessible, but when used as a whole, they may not meet accessibility standards. To ensure that these platforms are truly accessible, it is essential to have accessibility testers and firms that can thoroughly audit and test the programs. This will help identify and address any accessibility barriers, ensuring that people with disabilities can fully participate in education programs.

In conclusion, the analysis highlights the need for greater digital accessibility and inclusion for people with disabilities. Judith Hellerstein’s advocacy for stronger digital economies and increased accessibility, efforts to update the ADA and enforce WCAG guidelines, as well as the need for better awareness and practices in developing and designing accessible content, are all important steps towards achieving this goal. Additionally, the analysis underscores the importance of making education programs accessible and inclusive, tailoring approaches to meet the specific needs of different disabilities, and testing platforms for accessibility.

Denise Leal

The analysis suggests that there is a pressing need for greater inclusion and visibility for people with disabilities in Brazil and Latin America. The speakers argue that in order to achieve this, policies need to be implemented and there needs to be a better understanding of these policies to make education and training programs more accessible. It is highlighted that only 14% of people with disabilities in Brazil pursue higher levels of education, indicating a significant gap in access to quality education. Additionally, there is a large wage gap of almost 25% less for people with disabilities in terms of salaries in Brazil, further emphasizing the need for equal opportunities.

The analysis also points out the crucial role that technology plays in increasing accessibility and connectivity for people with disabilities. It is noted that the legal system in Brazil is primarily online, which enables individuals with disabilities to participate more effectively. Social media platforms are also becoming key venues for speech for individuals dealing with disabilities, enabling them to have a voice and share their experiences.

However, the analysis also highlights the need for appropriate moderation on social media and online spaces to protect people with disabilities from online bullying and hate speech. Instances of online bullying and hate towards people with disabilities in Latin America have been reported, and it is emphasized that moderation is necessary to safeguard individuals from online harm.

The analysis further emphasizes the importance of recognizing and accepting invisible disabilities. People with disabilities that are not easily visible often face difficulties and prejudice. It is argued that their rights are not immediately recognized, and it is imperative to raise awareness and promote acceptance of invisible disabilities.

Furthermore, the analysis emphasizes the role of communities in providing internet access for disabled individuals and other minorities. A successful example in Brazil is mentioned, where indigenous and traditional communities have taken the initiative to self-organize internet access. This highlights the potential of communities as key players in bridging the digital divide and ensuring accessibility for all.

An interesting point raised in the analysis is the question of the economic feasibility of making online content more inclusive. There is a consideration that economic interest plays a role in determining the inclusivity of online content, raising questions about the prioritization of accessibility in commercial ventures.

Lastly, the analysis laments the lack of attendance at events discussing disability issues. It is argued that more space and voice should be given to discussing ways to improve infrastructure and technology for people with disabilities.

In conclusion, the speakers in this analysis shed light on various aspects related to inclusion and visibility for people with disabilities in Brazil and Latin America. They stress the importance of policies, understanding, and accessibility in education and training programs. Technology is seen as a powerful tool for connectivity and accessibility, while moderation is necessary to protect individuals from online harm. Recognition and acceptance of invisible disabilities, the involvement of communities, and the economic feasibility of inclusivity are also key considerations. The analysis highlights the need for increased attention and dialogue to address the challenges faced by people with disabilities and work towards a more inclusive society.

Gunela Astbrink

Gunela Astbrink, an influential advocate in the field of accessibility and disability within internet governance, actively supports individuals with disabilities and promotes accessibility. She is championing inclusivity by accompanying three persons with disabilities to the IGF meeting and mentoring them in their journey within internet governance. Astbrink argues that mainstream legislation and policy should include provisions for accessibility, highlighting Australia’s national disability strategy and the Telecommunications Act. She commends the Australian Communications Consumer Action Network (ACCAN) for enforcing accessibility-related policies. Astbrink emphasizes the importance of including disabled individuals in policy implementation and praises ACCAN’s representation of consumers and consumers with disabilities. She also promotes the use of public procurement provisions to ensure widespread use of accessible ICT products. Astbrink believes that organizations should have accessibility champions to collaborate with content and tech developers. She highlights the employment challenges faced by people with disabilities and stresses the need for greater opportunities and support. Astbrink calls for more representation from the disability community in internet governance discussions and encourages individuals to voice their concerns to the IGF Secretariat and the Multistakeholder Advisory Group (MAG). She mentions the existence of a funding program and a training program on disability in digital rights and internet governance. Overall, Astbrink’s work aims to reduce inequalities and create a more inclusive society.

Muhammad Kamran

The analysis reveals several important points discussed by the speakers. Firstly, Muhammad Kamran, a practicing lawyer from Pakistan, is highlighted as an expert in cyber crime. This establishes his credibility in the topic and sets the stage for further discussions.

The speakers also discuss the increasing prevalence of cyber crimes with the advancement of technology. This negative sentiment implies that as technology evolves, so do the methods and sophistication of cyber criminals. This poses a significant challenge for individuals, governments, and organizations to protect themselves from cyber threats.

To address these issues, the Internet Governance Forum (IGF) is presented as a platform for finding solutions to cyber crime. This positive sentiment emphasizes its importance in bringing together various stakeholders to tackle the complex issues surrounding internet governance and cyber security.

The broader impact of technology on our lives and future generations is acknowledged. This neutral sentiment indicates that technology is seen as a powerful force that influences various aspects of society. It can bring numerous benefits but also raises concerns about its potential negative consequences.

One of the positive aspects highlighted is the use of technology to assist people with disabilities. The existence of assistive apps and devices like Google Assistant is mentioned as evidence to support this argument. The sentiment here is positive, implying that technology has the potential to improve the lives of disabled individuals by providing them with greater accessibility and independence.

The speakers also emphasize the importance of making digital platforms accessible to everyone. This requires implementing features such as screen readers or captioning and involving disabled individuals in policy-making. This positive sentiment highlights the need for inclusivity and ensuring that technology is designed with consideration for people with disabilities.

The topic of disability and inclusivity continues with the understanding that disabled individuals should be considered “specially abled”. This neutral sentiment challenges societal perceptions of disability and promotes a more empathetic and positive approach towards disabled individuals.

To effectively utilize technology, it is argued that training and resources should be provided to disabled individuals. The sentiment here is positive, indicating the importance of empowering disabled individuals with the necessary skills and tools to fully engage with technology. The mention of alternative formats such as Braille or audio versions further highlights the need for accessibility.

Collaboration between tech organizations, government, and disability organizations is seen as essential to address the challenges faced by disabled individuals. This positive sentiment acknowledges that by working together, these stakeholders can combine their expertise, resources, and influence to create meaningful change and greater inclusivity.

Furthermore, the argument is made that disabled individuals should be actively involved in policy designing. This positive sentiment emphasizes the importance of consulting disabled persons when developing effective policies and programs. Their lived experiences provide valuable insights that can help create more inclusive and sustainable solutions.

Finally, the responsibility for promoting disability rights and advocacy is also stated to fall on disabled individuals themselves. This neutral sentiment implies that it is not solely the government’s responsibility, but disabled individuals should also actively participate and advocate for their own rights. This promotes a sense of empowerment and agency within the disabled community.

In conclusion, the analysis highlights various important discussions regarding cyber crime, the impact of technology, and inclusivity for disabled individuals. It stresses the need for collaboration, accessibility, and the active involvement of disabled individuals in policy-making and advocacy. These insights provide valuable considerations for addressing the challenges and opportunities presented by technology in creating a more inclusive and secure society.

Session transcript

Saba:
Good afternoon, welcome to this session on digital inclusion and accessibility. Today, we have a crucial objective: to explore policies, strategies, and technologies that can promote inclusive and accessible digital services, especially for people with disabilities. So we aim to address the challenges they face and also identify ways to bridge the digital divide. Throughout this session, we will delve into three important key policy questions. First, we will address the policies that can be implemented in different regions across the world to ensure that technologies and digital services are designed inclusively. Second, we will examine strategies to bridge the digital divide, empowering people with disabilities rather than marginalizing them. Lastly, we will explore how training and education programs can be implemented or made more accessible and inclusive to meet the needs of people with disabilities. By the end of this session today, we hope all participants, joining us onsite and online, will gain valuable knowledge on how to properly provide systems, digital services, and technologies that allow all people to actively participate and engage in the digital world. So now, let me introduce our panelists and speakers who will shed light on these important topics. First, we have speakers onsite here: Judith from the private sector, representing the Western European group. Second, we have Gunela from Asia Pacific, representing the civil society group. Third, we have, from our online speakers, Theorose from civil society, representing the African group. And we also have Denise, who will be onsite here, representing the private sector and also the Latin American and Caribbean group. And lastly, from our online speakers, we have Muhammad Kamran, representing the private sector in the Asia Pacific group.
So I will give the floor to the onsite speakers to introduce themselves and also I will give the floor to our online moderator to introduce our online speakers. Thanks so much and welcome to everyone coming here.

Judith Hellerstein:
So my name is Judith Hellerstein. Although I have multiple hats, I do have my own firm, Hellerstein & Associates, and besides doing other policy and regulatory work trying to help countries have more effective digital economies, I also do a lot of work on accessibility directly with the U.S. government. I’ve participated in several of the ITU bodies, the Plenipotentiary, the Council Working Group on International Internet-related Public Policy and others. But here I am also representing the Dynamic Coalition on Accessibility and Disability. We’re one of the main Dynamic Coalitions here. We had our session yesterday. And I am one of the two co-coordinators here. So I welcome you all to the session. Thank you.

Gunela Astbrink
And I appreciate the invitation to participate in this particular session. My name is Gunela Astbrink and I’m based in Australia, but I work globally as chair of the Internet Society Accessibility Standing Group, and Judith just mentioned the DCAD, the Dynamic Coalition on Accessibility and Disability. And through the generosity of INTSURF, we have three persons with disability able to come to the IGF to participate, and I’m fortunate to be mentoring them in their progress in Internet governance. Thank you.

Saba:
Thank you very much. Now I will give the floor to our online moderator Marjorie to introduce our online speakers.

Marjorie Mudi Gege:
Hello. Hello, Saba. Hello, everyone. I’m happy to be here and happy to have you all for this session. I’m Gege Marjorie. I’m in Cameroon, Africa. And I would like to introduce our online speakers. So we have Theorose Elikplim and Muhammad Kamran, who will be talking on the first policy question. Okay. So I would like to give the floor to Theorose now to introduce herself more properly.

Theorose Elikplim Dzineku:
Thank you, Marjorie. And hi, everyone. Good morning, good afternoon, and hi from Dawn. My name is Theorose Elikplim. I’m currently a PhD student at Penn State University. Just like most of our speakers, I guess I’ll borrow Judith’s words to say I wear many hats. I work with the Ghana Youth IGF as part of the CRM team. I’m currently also part of the ISOC alumni network, and I do advocacy work on inclusion, encryption, and online safety as well. So I’m glad to be here, and I hope that we have a fruitful discussion. Thank you, Marjorie.

Marjorie Mudi Gege:
Okay. Thank you very much, Theorose. It is really nice to have you here. So I’ll give the floor to Muhammad Kamran. Muhammad, you have the floor.

Muhammad Kamran:
Hello, everyone. I hope you all are doing well, and my video finds you well. So my name is Muhammad Kamran. I’m from Pakistan, and currently I’m in Peshawar. I am a practicing lawyer. I graduated two years ago, and my specialty is criminal law, specifically cyber crimes and such kinds of things. With the passage of time, day by day, as technology is coming into our lives, the thieves are also getting smarter, and cyber crimes and such issues are also increasing day by day. So I have some expertise in that. And IGF, I think, is a platform where we can address such issues and find some solutions to them, and also look at how technology is having an effect on our lives and on the generation that is coming up next, after us. So I think being here is going to be fruitful for me and maybe for others as well. Thank you so much for having me.

Marjorie Mudi Gege:
Okay. Thank you very much. So we’re just going to move directly into our first question, which is, what policies can be implemented in your region to ensure that technology and digital services are designed and developed to be inclusive and accessible to people with disabilities? So this policy question will be addressed by our speakers, Theoros and Judith Hellerston. So I’ll give the floor first to Judith.

Judith Hellerstein:
Hi. Thanks so much. So it’s actually a combination of policies and also awareness raising. With policies, I’m from the U.S., and in the U.S. we have the Americans with Disabilities Act, which works to ensure that, at least for government right now, all government websites and other websites are accessible for persons with disabilities. But there’s also a movement working now on an effort to update the act to make sure that websites and other areas are accessible. And with that, the key is the guidelines: the Web Content Accessibility Guidelines, a series of guidelines from the World Wide Web Consortium, the W3C. They work on web content guidelines, and there’s work on all types of publishing. The key is that everything needs to follow the WAI, the Web Accessibility Initiative’s guidelines, and especially on websites, the WCAG has to be 2.1 or 2.2. And there’s a really big problem lately, because today, throughout the world, only 3% of the internet is accessible for persons with disabilities, despite there being over 1.3 billion of them globally. So it’s a very big problem, and the problem is made worse because companies are not telling developers they need to follow these guidelines. So developers are not doing it, and then they have to retrofit a system. So the real issue is that you need more enforcement, so that everyone has to follow the guidelines and make sure that all the sites are at least WCAG 2.1 and at least AA, preferably AAA, compliant. And if you look at the WAI, on w3.org, and if you look at the WCAG, you can get those. That is really the key. Besides the laws we have, it is also about making sure that these are accessible. And there is also the issue of metatags.
So when I say awareness, I mean that when people are creating sites or publishing images or other things, they’re not aware that people can’t see an image. So when a person using a screen reader comes across it, it’ll just say “image”, or it may say “possibly man, possibly with a dog”. So all the pictures need to be described, and the same goes for PowerPoints or any of these images. Or some people like to cut and paste from a document, but when you’re cutting and pasting, you’re creating an image, and then it makes an accessible document inaccessible. So you have to be aware that you have to save the documents properly. It’s really easy in Word: you can save it and upload it, or link to it. And the metatags are really easy, too; you can just right-click on the image. So the software is easy to use; it’s that people are just not aware.
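The practice Judith describes can also be checked automatically. A minimal sketch using only Python’s standard library (the class and function names are illustrative, not a tool mentioned in the session) lists every image on a page whose alt attribute is missing entirely:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect <img> tags whose alt attribute is missing altogether."""
    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images with no alt attribute

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # A missing alt attribute is a violation; an explicit alt="" marks
        # a decorative image and is fine for screen readers.
        if "alt" not in attrs or attrs["alt"] is None:
            self.missing.append(attrs.get("src", "<no src>"))

def audit(html_text):
    """Return the list of image sources that need a description."""
    auditor = AltTextAuditor()
    auditor.feed(html_text)
    return auditor.missing
```

Note that a deliberately empty alt="" is left alone here, since it is the standard way to mark a purely decorative image that screen readers should skip; a fuller audit would also review whether the descriptions themselves are meaningful.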

Marjorie Mudi Gege:
Okay. Thank you very much, Judith, for that. I would now like us to have Theorose’s opinion about the policies that can be implemented in her region to ensure that technology and digital services are designed and developed to be inclusive and accessible for people with disabilities. So Theorose, you have the floor.

Theorose Elikplim Dzineku:
Thank you so much, Marjorie. And again, I’m grateful to the first speaker, because she has actually tackled part of the things that I wanted to say, and so I’m not going to repeat that again. I’m glad she spoke about the platform regulations and how content can be made more accessible online as well. So I’m speaking from the African region, and I’m going to delve more into advocacy work. Because one of the points that she made, which is very real, is the fact that people just don’t know that something like that exists. And people are not, I wouldn’t use the word not interested, but people seem to forget that not all content is easily accessible by everyone. So even content creators themselves do not make provisions for that. For example, when somebody is creating a YouTube video, who are the audiences in mind? And how does the person, for example, make that particular video accessible to everyone? The whole idea is to create an almost seamless way of consuming content where we don’t necessarily have to say this is for able people and this is for disabled people, because virtually it’s just all one humanity. So access is very important, but advocacy is more important, especially in my opinion, because of the content creators and the platform creators. I know Apple devices and even some other mobile phones now have accessibility options where you can have voiceover or text to speech to help you out, or an easier way of navigating your phone without necessarily being able to see it or hear it. But again, within the African region, how widely adopted is that? How many people are aware of that? How often do we do capacity building and train people? And the major issue is, how many people can afford those devices? We’ve had, in other sessions, the issue of internet accessibility and the affordability of technology and devices as well.
So the question is, how many people can even afford the mobile devices that have those options? What is the percentage of that? And those who cannot afford those devices, what is the percentage of that? I mean, it’s obvious that the affordability gap is huge. Now, how do those people still access content? If they go to the internet cafes in the local communities and say, I want to read something on the internet, how accessible is that content to them? How do they know about that? And how would we handle such a situation in our institutions, in our schools? How many schools have a computer lab that is built to cover that? Those are some of the questions that still need to be explored. But overall, the idea is that we should come to a middle ground where we bridge the gap, where we don’t necessarily assume that everybody can easily access content online, and that everybody can afford the devices that provide accessibility. I’ve always stood by the idea that there shouldn’t be any clear discrimination or gap to point out that this person is able and this person is not able. Technology, in its sense, should be basic to everybody. And I’m glad you really spoke about the web and how content is tagged. Again, I would ask, within our African context, and I stand to be corrected on that, how many of us know that? How many of us know how to meta-tag? I was surprised when you said you can just do it in Word with a right-click, and my question is, now we know we can right-click and put meta-tags on things, but how many of us knew that before this session? How many of us will remember after this session?
But if there is training and capacity building at every level in our schools, perhaps we can get used to doing that, and we can know that there are just basic and easy ways of making sure everybody is included. And that’s where the advocacy work comes in. That’s where the grassroots community work comes in. Not that we just leave it behind after speaking at a session, but after IGF and after the various sessions where we speak about bridging these gaps, what do we do after that? I’m going to end here for now and leave the floor, but that’s something we can think about and take home, and see what each individual can do to contribute. Thank you.

Marjorie Mudi Gege:
Thank you very much. Thank you very much, Theorose, for that very insightful opinion. So I’ll just pass the floor now to Saba, our onsite moderator, for the next phase of the session.

Saba:
Thank you. Thank you very much, Marjorie, Theorose and Judith, for your valuable insights on the first question. Now let’s move to our second policy question: How can the digital divide between people with disabilities and those without disabilities be bridged? What strategies can be employed to ensure that technology is used to empower rather than marginalize people with disabilities? I invite Gunela, joining us onsite, to speak on this question, and also Muhammad Kamran, joining online. Over to you.

Gunela Astbrink
Hello. Yes, I live in the Asia-Pacific region, which we are now in for the IGF, and I’ve been asked to speak more specifically about activities in this region. This region is the most populous in the world. It has countries with huge populations, as we know, of over one billion, and little countries in the Pacific that might have 2,000 people. That’s it. And the greatest diversity in religion, language, culture, and economies. So there are a lot of challenges in this region. One of the things, though, when it comes to bridging the digital divide, is to ensure that mainstream legislation and policy include clauses about accessibility. So it’s not only that there are separate policies, which are very important, but having accessibility as part of communications acts and communications policy really does make a difference. If we take the case of Australia, sure, we have a national disability strategy that has some key aspects on accessibility to communications technologies, but the Telecommunications Act includes specific provisions as well, and certainly the disability discrimination legislation. But in all of those cases, implementation has to happen. It’s one thing having the policies, but they need to be implemented. And that’s really where persons with disability come in, to ensure that they are part of the process of helping implement policies. In Australia there is, funded through the federal government, and it’s actually in the Telecommunications Act, the Australian Communications Consumer Action Network, ACCAN. Its role is basically to represent consumers and consumers with disabilities to government and the private sector, to make sure that implementation actions are happening in all of those cases. So you have consumers generally being represented in this body, but specifically persons with disabilities as well. So there is that cross-fertilization of ideas and strategies and advocacy.
And I also wanted to mention the public procurement provisions that are in force in a number of countries. It started in the U.S. with something called Section 508 of the Rehabilitation Act. And it means that governments need to include ICT accessibility criteria when they purchase anything ICT related. And in Europe, it has been harmonized with European standards, and one is called EN 301549. And that is talking about user requirements by persons with disability and how to achieve that in public procurement. That has been adopted in a number of countries across the world. Kenya, for example, India, and Australia. And we want to see the implementation of that public procurement type of provision. And my final point in this particular session here is, in a mainstream organization, to try and make sure that there is an understanding of accessibility and persons with disability and bridging that digital divide. It’s really, really important to have accessibility champions. People who have some knowledge of accessibility and work in various parts of an organization and can remind content developers, any tech developers, to make sure that accessibility is included as particular products are developed. Thank you.

Saba:
Thank you very much for that insight. Now, I will give the floor to Marjorie, and the second question will also be answered by Muhammad, joining us online. Over to you, Marjorie.

Marjorie Mudi Gege:
Sure. Thank you, Saba. So, Muhammad, can you please take the floor? How can the digital divide between people with disabilities and those without disabilities be bridged? And what strategies can be employed to ensure that technology is used to empower rather than marginalize these people?

Muhammad Kamran:
Okay. Thank you, Marjorie, for the question. See, I think technology can help us in many ways, specifically when it comes to disabled people. There are assistive apps; there are assistive devices. Google Assistant is one very small example. I think if we extend that to other gadgets and other devices, it can be helpful to us. Coming to bridging technology between disabled people and others: first of all, I think we cannot call them disabled, but should call them specially abled, because if God takes one thing from you, he is for sure going to give you so many other blessings that others have not been given. So I think they are people who are specially abled. To bridge the digital divide between people with disabilities and other people, it is important to normalize digital platforms and technologies and to make them easily available to everyone. That can include implementing features like screen readers, captioning, or adjustable fonts, which are available in some apps and some phones, but not everywhere. And I have used the word “normalizing”: we have to normalize all these things and make them available as much as we can. Also, by prioritizing accessibility and engaging disabled people while making these programs and strategies, because if disabled people are part of making these policies, the policies will be much more effective at the grassroots level, because they are the ones who are affected. They know where the lacuna lies, and they are the ones who are going to tell us how we can make strategies that are the best at the ground level. So, providing training and resources to individuals with disabilities can also help them navigate and utilize technology very effectively.
As I said earlier, if we leave it to the people who are affected, only then are we going to get results as effective as we need. One more thing, which I think is one of the most important: collaboration with technology organizations, government entities, and disability organizations, that is, those organizations that work for disabled people. Technology companies and disability organizations, along with government entities, collaborating while making such platforms, programs, or policies is going to be one of the most effective ways to bridge the technology gap between disabled people and others. So, training and education programs should be made very accessible to every one of us, specifically to individuals who are disabled. Alternative formats, such as Braille or audio versions, should be provided, and physical assistance in the learning environment should be made very accessible, because a physical learning environment is going to help them in the best way. If, let’s say, a person is blind, then if he sits in a physical learning environment, I think he is going to learn faster than on any other platform. That is why I have included the physical learning environment in my opinions. Incorporating assistive technologies, as I mentioned earlier: Google Assistant is a very effective thing, but we can include such assistive technologies in different gadgets, on different platforms, and for people of different ages. Google Assistant is going to work the same for everyone, but what a 10-year-old kid will need is different from what a 25-year-old person will need. So I think dividing it by age group is also important to me.
Offering flexible learning options, such as online courses, is also important. If someone has no access to a physical learning environment, they can be given online learning options. For example, we are connected online right now: some of us are sitting in Ghana, I’m in Pakistan, some are in Japan. So online connectivity is also very important. The same goes for disabled people, because they are the ones who need it more than us.

Marjorie Mudi Gege:
Oh, Muhammad, thank you. Please round it off now.

Muhammad Kamran:
Okay, I’m sorry I’m taking long. I’m sorry. So, yeah, disabled individuals are to be involved in designing all these policies. My last point would be, as I mentioned earlier, that while designing each and every policy and program, we need to consult these people, because they are the ones who are affected. So, they are the ones who are going to give us the policies which are effective on the ground. Thank you so much.

Marjorie Mudi Gege:
Thank you very much, Muhammad. That was very insightful. I learned a lot, including many things that I didn’t even know about. So I think we can move now to our next policy question, policy question three. So, Saba, we have Denise there already. Welcome, Denise.

Saba:
Thank you very much, Marjorie. Now I really want to give the floor to Judith to share what kinds of trainings and education programs can be made accessible and inclusive to meet the needs of people with disabilities. If you have any comments or questions, feel free to raise your hand here, and online participants can put them in the chat, and Marjorie will take them from there. Judith, over to you, please, and at the end please also share your key takeaways and recommendations on all of these topics.

Judith Hellerstein:
Sure. Thanks so much, and thank you for giving me the floor. I’m going to be leaving shortly afterwards because I’m organizing another 5:30 session for the Policy Network on Meaningful Connectivity, so I apologize that I have to run out afterwards. On training and education programs, the key is that programs need to be designed to be accessible to all. I know a lot of places like to use a lot of pictures and descriptions, but then all those pictures need to be described, because otherwise persons with disabilities are not getting anything out of them. We had a perfect example in one of our disability fellows, who created a special braille keyboard in India for STEM education. So you need to rethink your sessions and what you are trying to gain from them, so that you can actually address all the people and everyone can benefit from the same session. If you use pictures, you really have to describe them; if you use diagrams, you have to describe them, because otherwise the person is not going to be able to follow, they will get frustrated, and they will drop off. If you really want people to be active, they need to feel part of the whole conversation and part of the learning, and they need to partner in it. So the key is that you need to rethink how you are going to do online or in-person education so that it focuses on meeting people’s needs. And there is also the problem that there are many different needs: a person with visual disabilities, a person who is hard of hearing, and a person with cognitive disabilities each require a very different approach. There is no one-size-fits-all approach; you need to tailor the approach to the actual group so that you can really address all the issues.
With a cognitive disability, you also have to make sure there are not too many images, and that pictures are not taking up the whole screen, because the person cannot deal with all the pictures, while a hard-of-hearing person may want exactly that. So you really have to work out who the community is that you are trying to address, and then figure out how you are going to meet their needs; that is one goal to keep in mind. The other is to make sure everything is accessible, including the platform you are using. Oftentimes people think the platform is accessible, or the company says it is, but it really isn’t. The key is often to have these programs audited and tested by another company whose business it is to audit and test them, to make sure they actually are accessible, because so many programs say they are, and it could be that the individual components are, but when they are put together in the actual program it is no longer accessible. That is why it is key to have accessibility testers and a firm that has audited it. Enforcement is also key: places say, oh yes, we’ll do it, and the program may have been accessible initially, but as they added newer material they didn’t keep up those standards, and then the whole program becomes inaccessible. So those are my takeaways: make sure you have an online program tested by an accessibility firm to make sure it actually works, and that pictures and everything else are described and appropriately meta-tagged so that people can read and see it. And now I have to run off to my session, so sorry, but Gunila has my information and she can direct any questions, and probably also answer them.

Saba:
Thank you very much, Judith, for your insights. Now I will give the floor to Denise to answer how trainings and educational programs can be made more accessible and inclusive to meet the needs of people with disabilities. Denise, please first introduce yourself and then answer. Thank you.

Denise Leal:
Hello everyone, I am Denise Leal from Brazil. I am here to represent Latin American and Caribbean people. I belong to the private sector and academia, I am part of the Youth Latin American and Caribbean IGF, and I am a former fellow of the Youth IGF program of Brazil. You might be wondering why I am speaking on this topic: I was also a volunteer and a teacher in an inclusion program at the Pontifical University of Goiás in Brazil, where we worked with elderly people and people with disabilities, so I have some experience in the topic. Before answering this question, I wanted to say that I am very happy that we have this session, because we are giving voice to this theme. It is really important, it is something we need to do more often, and I wish we had more participation on the topic and that the IG community could really get more involved in it. To begin, when we talk about how training and education programs can be made more accessible and inclusive to meet the needs of people with disabilities, I think first of all we need to give space and voice to people with disabilities to speak about their situations, how they feel, and what their needs are. Very often we don’t really give them space to speak about their needs, or we don’t have enough patience to listen to them, because we think, oh, they speak in a different way or they hear in a different way. We must be patient when a person without a disability sees and listens to a person with a disability; that person really needs to be heard, and we need to give space, voice, and power to these people to say what their needs are. And of course we need policies on the topic, but we also need people to understand the policies. In Brazil, almost 15 percent of the population are persons with disabilities, so it is a large number of people, not a small one.
In Latin America, there are 85 million people with disabilities, and therefore we need to work on policies to make training and education programs more accessible, not only online but also on site. Schools need to be more accessible. And when it comes to technology and the internet, what we have seen in Latin America is that people with disabilities are gaining space on social media to speak, and that is good: social media is playing a role here, and they are occupying spaces on the internet. But we also need to moderate social media more, because I have seen cases in Latin America where people with disabilities have suffered online bullying and online hate for speaking about their lives, their problems, and their issues. Therefore, as my colleague has spoken really well on the topic, we need online training and online platforms that are really accessible, but we also need to moderate social media and websites so that people with disabilities do not suffer in these spaces from all the kinds of problems they could face online. I also believe that technology plays a huge role here, helping people to get more connected and to learn more. I have a friend who has spoken at the Youth Latin American and Caribbean IGF; he is today a lawyer, and he is a person with a disability. He is able to be a lawyer because today in Brazil the legal system works almost entirely online, so he can use online platforms and technology to help him in his legal activities. So today he has a higher education degree, and he is planning to go for a master’s degree, thanks to technology in Brazilian education.
I also wanted to highlight an important point: in Brazil, only 14% of people with disabilities go on to higher levels of education, so it is a really small number. There is also a wage gap: people with disabilities earn almost 25% less in salaries, which is so unfair. The law speaks about it, saying we cannot have such a wage gap, but it is a reality. So we have policies and laws on the theme, but how can we make them a practical reality? Where is the accountability on the topic? I wanted to leave you with these answers and also these questions, and I hope that in the coming years, at the next IGFs, we have more involvement on this theme and even more people with disabilities speaking and occupying spaces. Thank you.

Saba:
Thank you very much, Denise, for your valuable inputs. So far we have been discussing implementing policies that prioritise inclusive and accessible design in technology and digital services, especially considering the diverse needs of people with disabilities. You also talked about the need to develop training and support programs to enhance digital literacy and assistive technology skills for people with disabilities, and offering ongoing technical support to address accessibility challenges has also been mentioned. Bridging the digital divide has been raised as well, by providing affordable access to devices and internet connectivity, creating accessible digital content, and offering tailored digital literacy programs. We have also discussed fostering collaboration among different stakeholders to promote digital inclusion, and raising awareness and advocacy for the rights and needs of these people in the digital realm. Now I will open the floor to any questions or comments from on site; please also feel free to share your comments and questions in the chat, and Marjorie, our online moderator, will take it from there.

Audience:
Hi everyone, I’m Joelson Diaz from Brazil. I represent the Federal Council of the Brazilian Bar Association, although at this moment I’m speaking in my personal capacity. Well, regrettably, I see this very empty room, which tells me that we have a big challenge in including persons with disabilities. We have talked a lot about inclusion at this conference, of women, indigenous people, and so on, which is of course quite important, but it looks like the challenge of including persons with disabilities is even bigger. I have a brief statement and two questions for the panelists. I’d like to express my sincere appreciation to the sponsors and organizing organizations for hosting this crucial panel on inclusion for people with disabilities. The panelists’ insights have been deeply valuable, emphasizing the complex dimensions of digital accessibility. The Internet serves as a gateway for many to find partners and jobs and to purchase products. Ensuring that apps and platforms are designed to give persons with disabilities the same opportunities is not just about technical accessibility, but also about inclusivity in content and user experience. In this regard, my first question: given the diverse types of disabilities and the unique challenges each presents in the digital space, how can we ensure that our efforts in digital inclusion are not just broad but also deep, addressing the specific needs of each disability type? Additionally, how can we foster a collaborative environment where governments, businesses, and individuals work hand in hand to drive meaningful change in digital accessibility? And finally, beyond technical accessibility, how can we ensure that the content and functionalities of digital platforms are inclusive, allowing persons with disabilities to fully engage in online activities such as dating, job hunting, and shopping?
What strategies can be employed to ensure that digital content is both accessible and relevant to their needs? Thank you very much.

Saba:
Thank you very much for that question. Now I will give the floor to Gunila to answer the first and second questions, and maybe, Theorose, you can add to that.

Gunila Astbrink:
Thank you. Yes, there was a lot in those two questions. I will probably start by addressing one aspect of your second question, relating to employment, and that is a huge issue. I will refer to Vidya Y, who is a DCAD Disability Fellow here at the IGF. She is blind, and she finished her high school education with a gold medal, yet she was the only one of her cohort who didn’t get a job at the end. She has done a degree in computer science, and so she started her own organisation, because she is a very strong woman. But the percentage of people with disabilities who have employment is far, far too low. What is happening in some countries, like Sri Lanka and Australia, is that there are organisations training persons with disabilities in marketable jobs, but as well as that, raising awareness in the companies that may employ them: being part of the interview process, and mentoring for a number of months as a person joins the workforce. That process really does make a difference, and it has been quite successful, but it is long term. And it is again about employers understanding that a person with a disability is not a liability. Many studies have shown that people with disabilities are very loyal employees, very consistent in their work. There might be some accommodations needed in the workplace, but it is about raising awareness of how that can be achieved. So I might pass it on, if anyone else wants to have a say on that question.

Saba:
Thank you very much. Theorose, maybe you can add on the second question, and then we proceed to the other one.

Theorose Elikplim Dzineku:
Thank you so much. I’ll try to make my answer very brief so we can have more questions. On a quick note, I agree with the person who asked the question: I wouldn’t say this is more important than the other sessions, but it is a critical topic. And when he said that the room was empty, I saw the camera pan around and thought, well, yes. It’s always a challenge we face: we talk about inclusion, and yet we don’t include the people who need to be included. It’s like trying to solve somebody else’s problem without having the person there to give you the whole picture. That notwithstanding, I agree with the earlier speaker’s point that people with disabilities have equal rights and are equally committed to their work, just like able-bodied people. Now, the first question was about how to address specific needs. In all honesty, to address somebody’s need, you have to speak to the person. It’s easy, as an able-bodied person, to say you need to build a platform or do this or that, without actually talking to people with disabilities. So again, I come back to the idea of speaking to the people who need the help and asking them how they want to be helped. It is always better to build our innovations and ideas around what they want, not what we assume they want, so they need to be included in every discussion we have. There was also a question on collaboration: how do governments and others collaborate? I don’t know if there is a simple answer, but there has to be collaboration, so reach out to them. In Ghana, for example, we have an organisation called Inclusive Tech, owned by Dr. Millicent, who is herself a person with a disability.
She mostly runs hackathons for people with disabilities, training them in innovation and building technology; that’s just one example, and I’m sure other countries have similar things. If we could collaborate with government or civil servants, or get funding to do all those things, that would help. And my last point, on how we create content: again, I come back to my first point. Let’s ask the people what type of content they want and whether they can be involved. I’m sure there are a lot of people with disabilities who are computer scientists and know how to build the app. Why don’t we give them the opportunity to do that? I guess that would help. Thank you.

Saba:
Thank you very much, Theorose. Since we are running out of time, let’s move to the next questions. We have a question from on site, and I will give the floor to Veronica.

Audience:
Hello. Can you hear me well? Hello, everyone. My name is Veronica, I’m Italian, and I’m the chair of the Internet Society Youth Standing Group. Thank you, first of all, for bringing the topic of disability to the IGF. I don’t so much have a question as a comment, because I would like to bring my own experience. When I was a child, due to an infection, I lost between 30 and 70 percent of my hearing. Interaction has always been very difficult for me, especially with people who have a very low tone of voice, so I always have to ask them to repeat what they say. And hearing your panel and your interventions, I think something is lacking, and that is granularity of approach, because not all disabilities are the same, and not all of them should be treated the same. For example, I don’t have full hearing capacity, and in-person interaction is always very difficult for me: to follow your interventions, I always have to read the subtitles, so auxiliary instruments of that type are very helpful to me. On the other hand, I had to give a lightning talk the other day where this instrument was not available, and it was very difficult for me to understand the people talking at the microphone. This is just to say that digital tools can also be very helpful: I have a better interaction online than I do on site or in person. So what I would also invite you to consider is that digital tools can be an amazing aid for people with disabilities. Mine is de facto a disability, but it is not recognised in any way, because in order to be recognised as a disabled person, each country has its own parameters, so it is not always simple to get access to support for this. Thank you very much, Veronica, for your comment. And next, please. Hello, this is Umar Khan from Pakistan.
I will second the disappointment of the gentleman from Brazil, and add some statistics from Pakistan with regard to persons with disabilities. These empty chairs in the room show the level of seriousness about digital literacy and inclusion for people with disabilities; this room should have been far more crowded. We have also seen very few people with disabilities at the IGF here in the international conference centre, which shows that the inclusion of people with disabilities in any field, and especially on the Internet, is somewhat of a disappointment, and I think the IGF should take it very seriously. Coming to the statistics of my country: my friend, class fellow, and college colleague Muhammad Kamran is also on the panel, and I’m so happy for him to be here. Pakistan has a population of around 236 million, of which about 6% have a physical or mental disability. It is also disappointing that only 37% of those 236 million are using the Internet. If a country of 236 million has just 37% internet users among the general public, you can imagine how much attention people with disabilities receive. So I think the IGF, the technical community, the companies, and civil society should be more serious, and I’m hopeful that we can see a good number of people with disabilities at next year’s IGF, wherever it takes place. Thank you.

Saba:
Thank you very much for those comments and questions. Now I will give the floor to our speakers, on site and online, for their final remarks, and they can also address the comments and questions that were asked. Please make it very brief, up to 30 seconds. Over to you.

Gunila Astbrink:
Thank you very much. Yes, in regard to the gentleman from Pakistan, I totally agree with you: we need to have more persons with disabilities attending, because the disability community’s motto is nothing about us without us. We need to be here. I have a disability; we need more people who represent our own voices, and I would strongly suggest that you write to the IGF Secretariat and to the MAG to express your disappointment that there aren’t enough persons with disabilities here. The IGF Secretariat has a funding program, but still not enough people with disabilities are coming. One short point: as the Accessibility Standing Group, we have a training program for persons with disabilities on digital rights and internet governance, and together with DCAD, the Dynamic Coalition on Accessibility and Disability, we have a small amount of funding to bring people here, but we need so much more. So thank you very much for that point.

Denise Leal:
Hi. I would like to thank you for your participation and questions. I was a little worried, since we didn’t have a lot of participation, but we had questions and comments on site and online. It’s a really important point. Thank you, Veronica, for what you shared with us. It’s the same in Brazil: it is sometimes difficult to have a person’s disability recognised, and most importantly, when a person’s disability is not visible, one you can easily see, the person suffers a lot of prejudice and does not have her rights recognised immediately. So it’s important that you speak out and that we talk about it, because there are also disabilities we cannot see, and we must recognise their existence and that these people also need help and recognition as people with disabilities. And thank you to the other Brazilian here, Joelson, I think. Thank you for being here and for your comment. Your question is actually hard: what strategies are both accessible and relevant? How can we make online content more inclusive? We have ways to do it, but do we have the economic interest in doing it? I also think this question aligns with the question from my other friend, the lawyer from Pakistan, about the technical and infrastructure aspects of the internet. In Brazil we have a successful example: indigenous and traditional communities that could not get internet access organised a local internet themselves. So maybe in the case of people with disabilities and other minorities, the answer lies in the communities themselves; not only the community of people with disabilities, but other minorities and local communities could be the answer.
But we have to mobilise and make people understand the needs of all these minorities and communities. In the case of the indigenous communities, they could do it because they were all located in the same place. But how can we make our small communities and minorities work together to find an infrastructure solution? I believe it’s possible, but we have to work harder on showing it to people. As you have all noticed and said, we don’t have that many people here; we should have a crowded room, but we don’t. So we must mobilise more people and create more space and voice to talk about these situations, so that we can have better infrastructure and technology for people with disabilities. Thank you, everyone.

Saba:
Thank you very much, Denise. Now I will give the floor to Marjorie to pass it to our speakers for their final remarks, recommendations, or key takeaways.

Marjorie Mudi Gege:
Thank you very much, Saba. So we have Theorose and Muhammad remaining, and I would also like us to include our online participants, so I’ll read out their questions; please touch on them while you give your final remarks. This is from Joseph Komiti: how can we make systems and apps, as well as online content, accessible to people with disabilities? And what policies can be implemented to protect people with disabilities from abusive content online, because there is some stigmatization, especially on social media? That is from Joseph, IGF Ghana Hub. And the next one: can our world truly progress if we continue to build barriers that exclude people with disabilities, or should we unite to break down these obstacles and create a more inclusive future? So maybe, Theorose, you can start. Thank you.

Theorose Elikplim Dzineku:
Great. So, again, I’m going to try as much as possible to keep it very short. Now, in terms of policies and stigmatisation: stigmatisation really exists in every sector, but I guess it’s worse for people with disabilities, because, as Veronica said, and as all our speakers keep saying over and over again, some disabilities are not visible for you to see, and the mistake we keep making all the time is trying to group everything as one. So this is what I would say: policy takes time, but the question is whether we even have such policies. For example, I cannot really say whether Ghana has one or not, because honestly I have not done the research on that, and I would not want to state it on authority. But it would be a good thing to find out: if we do have one, how accessible is it, and how aware are people of such a policy? On stigmatisation, I guess it still stems from education: teaching people that people with disabilities are human, that there is no problem with that, and that everything is fine. That’s a good place to start. And as I conclude, I want to take the time to say thank you to everyone. We hope sessions like this are extended. I will leave my email in the chat; feel free to suggest any project or capacity-building training. This is not just a one-time discussion; it could be something that continues, and I hope that next year, when we discuss the same area, we have a fuller room and people who really want to share what they have been going through, so that we can build content around them as well. But I will stand by my first point from the beginning: advocacy is always key. Advocacy never ends; it starts.
But it’s a continual process, and I hope we all take that in. Thank you so much, and I’ll hand over to Kamran.

Marjorie Mudi Gege:
Thank you. Please 30 seconds. Thank you.

Muhammad Kamran:
So thank you so much, everyone. I hope all the questions have already been answered. As for what we can do to implement these ideas, we have talked about all of these things in detail, but I’m going to add one more thing. We said that the government should do this and we should do that, but one thing I want to add, after seeing the situation in the room right now, is that people with disabilities also need to address their issues themselves. We should not be the only ones talking about this; they should make us talk about all of it. This empty room is an example that maybe they are not interested in being part of this, so we have to build that interest, and we have to organise conferences or awareness sessions to educate them so that they can come forward and talk about their disabilities. I think we have covered all the aspects possible in a very short span of time, and I’m sorry if I have left anything out. I’m thankful to all of you for having us here and for letting us speak, and I hope to see all of you somewhere else tomorrow. Thank you so much.

Marjorie Mudi Gege:
Thank you very much. Yes, Saba, you have the floor. Thank you.

Saba:
Thank you very much. We have come to the end of our session now, and I would like to thank our speakers and panelists for their valuable inputs and contributions. Your expertise and insights have been truly enlightening. Let us all continue our efforts together to ensure inclusivity and accessibility in digital technologies and digital services. Thank you all for attending today’s session, whether you joined on site or online; we hope you found it informative and thought-provoking. And remember that your involvement is crucial in creating a more inclusive future for all. Have a wonderful day. Thank you so much.

Audience: speech speed 133 words per minute; speech length 1053 words; speech time 474 secs
Denise Leal: speech speed 143 words per minute; speech length 1430 words; speech time 602 secs
Gunila Astbrink: speech speed 117 words per minute; speech length 1104 words; speech time 568 secs
Judith Hellerstein: speech speed 151 words per minute; speech length 1418 words; speech time 565 secs
Marjorie Mudi Gege: speech speed 173 words per minute; speech length 605 words; speech time 210 secs
Muhammad Kamran: speech speed 153 words per minute; speech length 1318 words; speech time 518 secs
Saba: speech speed 131 words per minute; speech length 1239 words; speech time 567 secs
Theorose Elikplim Dzineku: speech speed 172 words per minute; speech length 1937 words; speech time 674 secs

Beneath the Shadows: Private Surveillance in Public Spaces | IGF 2023



Full session report

Audience

During the discussion, various topics related to technology and data were explored, including the use of blockchain technology for collecting biometric data. An audience member asked for opinions on this matter. The sentiment towards this question was neutral, with no specific arguments or evidence provided for or against using blockchain for biometric data collection. However, it was mentioned that blockchain might be beneficial in controlling access to data, suggesting a potential advantage in using this technology for biometrics.

Another concern raised by an audience member was the issue of real-time surveillance in India. The sentiment expressed was negative, with the argument focusing on the lack of protection and rights for users in the face of such surveillance. The audience member questioned whether individuals are adequately informed when their data is being processed and if they are aware of being under surveillance in public areas. Unfortunately, no supporting facts or evidence were provided to further substantiate these concerns.

Furthermore, an audience member from Australia discussed the increasing use of advanced technology in accumulating data and enhancing private surveillance. This sentiment was negative, and the argument emphasized the implications this has for user privacy. It was highlighted that developed nations are acquiring wealth and control through the collection of data using advanced technologies. However, no specific evidence or examples were provided to support this claim.

In conclusion, the discussions surrounding blockchain technology, data security, biometric collection, and surveillance touched upon important implications for data protection and user rights. While the use of blockchain for biometric data collection was not extensively debated, the potential of blockchain in controlling data access was acknowledged. The concerns raised about real-time surveillance in India and the increasing use of advanced technology in data accumulation and private surveillance highlighted the need for protections and solutions to safeguard user privacy. Nonetheless, the lack of concrete evidence and specific supporting facts weakened the arguments presented.

Beth Kerley

The rapid increase in network surveillance of physical spaces, alongside traditional digital surveillance, has become a growing concern. It exposes individuals to potential targeting by both public and private entities. It was predicted that by 2021 the number of surveillance cameras globally would exceed 1 billion, blurring the lines between public and private surveillance.

Emerging technologies such as biometric surveillance and ‘emotion recognition’ are giving those who control cameras in public spaces new capabilities. Facial recognition technologies are being sold as part of the surveillance package, enabling the identification of individuals in real time. Emotion recognition technology is also being used in different countries to monitor students, drivers, and criminal suspects. These new developments raise ethical and privacy concerns as they can be intrusive and have significant implications for personal freedom and autonomy.

The involvement of private companies in surveillance poses challenges to transparency and accountability. Private entities are inclined to protect their intellectual property, making it difficult for citizens and non-governmental organizations (NGOs) to understand how surveillance systems operate. Additionally, contracts between public and private partners often lack specific provisions on how private entities can utilise the resulting data. This lack of clear guidelines raises the risk of misuse and potential violation of privacy rights.

While biometric identification has its controversies, it also has legitimate uses that should not be overlooked. It distinguishes itself from biometric surveillance, which involves the monitoring and tracking of individuals without their explicit consent. Biometric identification allows users to intentionally use their physical attributes, such as fingerprints or facial recognition, to access a space or account. However, appropriate safeguards are needed to ensure that biometric data is properly protected and not misused by unauthorised entities.

The integration of sensitive data with blockchain technology is met with scepticism. Storing sensitive data in a system designed to be unerasable raises concerns about data security and privacy. The immutability of blockchain can be seen as both a benefit and a risk, as any potential breaches or unauthorised access may have long-lasting consequences.
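The design concern raised above can be made concrete with a small sketch. The mitigation usually discussed for the immutability risk is to keep raw biometric data off-chain and record only a salted commitment (a hash) on the ledger, so the off-chain record can still be deleted later. This is an illustrative sketch, not a proposal from the session; the function names `commit_biometric` and `verify_biometric` and the use of SHA-256 are assumptions for the example.

```python
import hashlib
import os

def commit_biometric(raw_template: bytes) -> tuple[bytes, bytes]:
    """Return (salt, commitment); only the commitment would go on-chain."""
    salt = os.urandom(16)  # random salt prevents guessing the template from the hash
    commitment = hashlib.sha256(salt + raw_template).digest()
    return salt, commitment

def verify_biometric(raw_template: bytes, salt: bytes, commitment: bytes) -> bool:
    """Re-derive the commitment from the off-chain record and compare."""
    return hashlib.sha256(salt + raw_template).digest() == commitment

# Once the off-chain template and salt are destroyed, the immutable on-chain
# commitment alone does not reveal the biometric data.
template = b"example-fingerprint-template"
salt, commitment = commit_biometric(template)
assert verify_biometric(template, salt, commitment)
assert not verify_biometric(b"different-template", salt, commitment)
```

In this pattern, the unerasable part of the system holds only a value that is useless without the deletable off-chain record, which is one way the immutability risk described above is commonly addressed.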

European digital rights groups argue for a ban on real-time surveillance in public spaces. They believe that real-time surveillance is difficult to control and regulate, which can lead to potential abuses of power. Striking a balance between security and privacy is crucial to maintain public trust.

Furthermore, public awareness and understanding of surveillance systems and the information possessed by the government are vital. In countries like Estonia, where elaborate e-government systems are in place, public awareness is a key safeguard to ensure trust in surveillance practices.

In conclusion, the rapid expansion of network surveillance in physical spaces, coupled with emerging technologies, raises significant concerns regarding privacy, transparency, and accountability. The involvement of private companies, appropriate safeguards in biometric identification, scepticism towards integrating blockchain with sensitive data, and the need for public awareness and trust all play crucial roles in shaping the future of surveillance systems. Striking a balance between security and individual rights is essential for the responsible development and use of surveillance technologies.

Yasadora Cordova

The debate centres around the issue of control and consent regarding users’ biometric and personal data. One perspective in the debate argues that user control is vital in order to prevent the misuse of data and protect privacy. They suggest implementing rules and ethical frameworks that increase user awareness of data collection. This approach emphasises the importance of separating different types of identification technologies to improve user control and promote data privacy.

Another viewpoint suggests that the control over sensitive biometric data should be entrusted to a neutral third-party or citizens’ counsel. The proponents of this argument raise concerns about law enforcement having access to and retaining the ability to edit videos, as this encroaches on personal freedom and raises privacy concerns. They caution against potential abuse of data by law enforcement agencies.

Furthermore, it is argued that user control over their data is essential not only for privacy but also to prevent potential misuse. The introduction of new rules and ethical frameworks is proposed to enhance user awareness of data collection practices. By doing so, users would have more control over their personal information and be able to protect their privacy more effectively.

A related point that arises in the debate is the need for user control in data privacy. It is observed that both industries and governments are collecting data indiscriminately. The cost of maintaining the integrity of personally identifiable information is said to be increasing. Therefore, it is suggested that obtaining permission or consent for the use of a dataset is crucial, ensuring that the dataset belongs to the person it represents.

Transparency and ethical considerations in data handling are also highlighted as significant concerns. It is noted that structuring and cleaning data are among the most expensive activities in the machine learning process. The demand for transparency through regulation is seen as a potential driver for governments and industries to clarify their data practices. Transparency is seen as the foundation upon which user control can be built.

In conclusion, the debate surrounding user control and consent over biometric and personal data highlights the importance of protecting privacy and preventing data misuse. Various arguments propose different ways to achieve this, including implementing rules and ethical frameworks, entrusting control to neutral parties or citizens’ counsels, and promoting transparency in data handling. These discussions aim to establish a balance between leveraging the benefits of technology and safeguarding individuals’ rights and privacy in the digital age.

Barbara Simao

Private surveillance companies in Brazil, such as Gabriel and Yellow Cam, are providing readily accessible 24/7 surveillance solutions to neighborhoods without oversight or accountability. This poses major risks for privacy, human rights, transparency, and data sharing. The lack of sufficient information and oversight surrounding these surveillance practices is of particular concern, potentially impacting historically marginalized groups and leading to exclusion. Additionally, the demand for private surveillance solutions highlights a lack of trust in public government solutions. The regulatory gaps in Brazil regarding the use of technology and data for public security contribute to the lack of oversight and accountability. Users should be informed about the risks, legal grounds, and potential access to their data. Moreover, more legal guarantees and safeguards need to be developed to regulate the activities of private surveillance companies. Overall, greater transparency, public awareness, and comprehensive regulatory frameworks are essential to protect privacy and individual rights in the context of private surveillance in Brazil.

Swati Punia

Swati Punia raises concerns about surveillance automation and its approach to crime and criminality. She argues that current surveillance practices tend to focus on handling petty crimes, while larger, structural crimes like financial crimes are often overlooked. Swati emphasises the need to reassess our conceptions of crime and criminality to address these systemic issues more effectively.

Swati highlights the importance of an interdisciplinary approach in civil society to tackle surveillance-related challenges. She believes that conversation and collaboration among academics, lawyers, and NGOs are crucial in effectively addressing these issues. Swati points out that working in silos can limit the effectiveness of addressing systemic problems, and therefore, calls for shared learning and interdisciplinary efforts.

Furthermore, Swati stresses the need for collaboration and shared learning among the global majority to address surveillance-related challenges. She suggests that conferences and discussions can provide platforms for stakeholders from different parts of the world to engage in dialogue and share their experiences. Understanding shared experiences within similar socio-political and cultural contexts can lead to more effective solutions and responses.

Another aspect Swati discusses is the importance of digital literacy and empowerment. She notes that even educated individuals may lack digital literacy skills, such as understanding financial matters online. Swati suggests that the government should do more in terms of digital empowerment, ensuring that individuals have the necessary skills and knowledge to navigate the digital landscape.

In terms of technology, Swati argues that it should focus on building privacy and security by design. She proposes that with the lack of digital literacy, there should be technologies that inherently secure and respect the user’s privacy. Swati believes that prioritising privacy and security in technological developments can mitigate potential harms and protect individuals’ rights.

Swati also highlights the role of civil society organisations (CSOs) in capacity building. She mentions that her organisation, the Center for Communication Governance, actively works on initiatives such as the privacy law library, regional high court tracker, and professional training. Swati believes that CSOs play a vital role in enhancing understanding and expertise in surveillance-related matters.

Lastly, Swati suggests that countries like India should not simply copy-paste solutions from Europe or other developed countries. Instead, they should consider their own social, cultural, and political environments when implementing digital solutions. Swati notes that many developing nations are rapidly adopting advanced privacy norms without sufficient preparation, which may not be suitable given their unique contexts.

In conclusion, Swati Punia’s discussion on surveillance automation highlights the need to reassess our approach to crime and criminality. She advocates for an interdisciplinary approach in civil society, collaboration and shared learning among the global majority, digital literacy and empowerment, privacy and security by design in technology, and the role of civil society organisations in capacity building. Swati encourages countries like India to consider their own context when implementing digital solutions in order to better address surveillance-related challenges.

Moderator

The session titled “Beneath the Shadows: Private Surveillance in Public Spaces” focused on exploring the involvement of the private sector in surveillance and public security solutions, highlighting the associated risks, implications, and necessary safeguards. Although the on-site speaker, Estela Aranha, was unable to attend, the session featured three online speakers, as well as Bárbara Simão, the Head of Research in Privacy and Surveillance at Internet Lab.

Bárbara Simão provided an overview of the topic, emphasising the role of the private sector in surveillance solutions and public security. Internet Lab, a think tank based in Brazil, specialises in digital rights and Internet policy. Bárbara holds a Master’s Degree in Law and Development and has extensive experience in digital rights research.

Beth Kerley, a programme officer with the National Endowment for Democracy’s International Forum for Democratic Studies, contributed to the session. Beth, who has a background in history and foreign service, discussed the challenges associated with private surveillance in public spaces. She offered insights based on her experience as the former associate editor of the Journal of Democracy.

Swati Punia, a technology policy researcher based in New Delhi, India, focused on the intersection of technology, law, and policy in society. With her legal background and expertise in privacy, data protection, and emerging technologies, she highlighted the importance of addressing these issues, particularly in developing countries. Swati’s current research involves exploring the potential of non-crypto blockchain in India and its implications for socio-economic challenges and privacy in the global South.

Representing the private sector’s perspective, Yasadora Cordova, the principal privacy researcher at Unico Idetec, a biometric identity company, shared valuable insights. With a history of collaborating with esteemed organizations such as the World Bank, the United Nations, Harvard University, and TikTok, Yasadora has worked on projects concerning digital citizenship, online security, and civic engagement.

During the session, questions from the audience were addressed, allowing for engaging discussions. The speakers also shared their final thoughts on the importance of regulation and policy to tackle the concerns surrounding private surveillance in public spaces.

Overall, the session provided a valuable contribution to the ongoing discourse surrounding the role of the private sector in surveillance and public security. The speakers’ diverse backgrounds and expertise added depth and richness to the discussion, offering attendees and online participants valuable insights to consider in this ever-evolving domain.

Session transcript

Moderator:
who’s here, thank you for being present, and hello to everyone who is following us online. And welcome to our session, which is titled Beneath the Shadows, Private Surveillance in Public Spaces. The general idea of the session for us is to discuss a little bit the role of the private sector in surveillance solutions and public security solutions. So we are here trying to cover in general how the private sector has been present in public security and surveillance solutions and the risks, implications of that, and which safeguards are important. For today’s panel, we will have three online speakers. Unfortunately, our on-site speaker, Estela Aranha, was not able to be present today, but we will have three online speakers, plus Bárbara Simão, who is the Head of Research in Privacy and Surveillance at Internet Lab, and who will also be introducing a little bit the subject. So I will introduce briefly Bárbara and pass the word to her so she can give us an overview of the topic, and afterwards we’ll pass to our online speakers. So Bárbara is the Head of Research, as I mentioned, for Privacy and Surveillance at Internet Lab. Internet Lab is a think tank on digital rights and internet policy, which is based in Brazil. She holds a Master’s Degree in Law and Development from the Fundação Getúlio Vargas in São Paulo, and she graduated in Law at the Faculty of Law of the University of São Paulo. She was an exchange student at the Paris Panthéon-Sorbonne. She worked as a researcher in the field of digital rights at the Brazilian Institute of Consumer Defense between 2017 and 2020, and she also served as a counsellor for Digital Health Data Protection at Fiocruz. So, Bárbara, the floor is yours.

Barbara Simao:
Hello, everyone. Good afternoon. Actually, good morning or good afternoon or good evening depending on the time zone you’re at. As Luisa mentioned, I’m Barbara and I’m Head of Research for Privacy and Surveillance at Internet Lab. And first of all, I’d like to thank you so much for coming here, for being present. And I will give you just a brief overview of what we are talking about and why we decided this would be an interesting topic for discussion. I’ll just share my screen because I have a few images that I would like to show you. Let’s see if this goes smoothly. I think you’re being able to see it, right? Yes, we can see it. Okay. So, well, the topic of the session is private surveillance in public spaces. And Luisa already mentioned and introduced a bit of what we are talking about. But I just like to give you an overview of what’s happening in Brazil that made us think that it was interesting to bring this topic to discussion today. So, in the past couple years, we are seeing the growth of these private companies called Gabriel, Yellow Cam, and, well, different names for one same kind of business. That is, these companies that sell private solutions, private cameras, private totems with cameras with, well, 24-7 ability to access them and that are shared between neighborhoods. So, any group of neighbors, any type of local community can buy a camera or access them by a monthly fee, and they are installed on public streets. So they are offering these totems or cameras that can be easily accessed by anyone. I bring here some of the excerpts of their websites. It is in Portuguese, but I will translate it to you. In general, they say, they claim that, I’ll just see if I can point so it’s more clear. OK. In general, they claim that modern cameras, they are solutions for security anywhere. They claim that their mission is to make streets, neighborhoods, and cities more intelligent to protect anyone inside and outside home.
And for Yellowcam, one of these companies, the app to access the cameras is 100% free. The download can be made by anyone in Play Store or Apple Store. And in the app, it’s possible to locate on a city map the cameras installed and visualize the images in real time, 24-7. And it’s even possible to search for images that were taken at different times or dates. They claim also that the installation of these cameras can make the region safer and that they can be accessed by the public authorities, including the police. And they claim the tendency with the cameras is that they can decrease criminality rates in these regions over time. So they’re basically selling these 24-7 solutions that can be acquired by local communities, by a group of neighbors, and that can be accessed without any kind of oversight and accountability. And that’s what’s concerning for us. And also the fact that some news pieces here in Brazil announced that these companies were having private channels of communications with the police stations. So the police stations weren’t actually demanding a warrant to access the images held by these private companies. They were accessing it almost like real time because of these private channels of communications that were existing between the companies and the public authorities. So we think this is an important topic for us to cover because it can pose an amount of risks for privacy and human rights. It can have impacts on transparency and data sharing between private and public bodies. And besides that, it can even affect the right to the city considering the fact that surveillance may affect in a bigger form, certain groups of people that are already excluded. And this case is somewhat relatable to what happened within the Clearview AI case, which for those who aren’t familiar with it, it was a company that held a database with over 3 billion images obtained through data scraping.
So they data-scraped public databases of images and they shared these images with police stations over the world for identification and resolution of criminal cases. And these images were collected without any kind of information, without any kind of oversight. And there wasn’t any kind of accountability about the company’s practices. And it was a case that caught the world’s attention because Clearview AI was actually fined by many different jurisdictions because of the lack of legal grounds for what it was doing. So I think this session is to discuss these topics, is to discuss the relation between public security, criminal procedure, and these private surveillance solutions that are arising. And not only in Brazil, but many countries have these home security solutions that are also being sold. So I think it’s important for us to discuss that having the lens of the impacts it can have to privacy and human rights and for transparency. So we prepared a few policy questions for you. And in general, we want to understand what are the broader societal implications of extensive surveillance and their impacts on human rights. How does private surveillance affect historically marginalized groups? How does the lack of transparency required from private surveillance companies affect human rights? What are the dangers concerning third-party sharing with other private institutions or public authorities without transparency? What are the liabilities that insufficient legal protections regarding the shared use of data poses to individuals and groups? And does the current regulatory landscape for privacy and data protection give us sufficient protections to ensure enforcement of human rights and equitable access to public spaces? So these are a few questions we prepared for today’s session. And these are a lot of questions to discuss, only one hour.
So without further ado, and having given this brief introduction, I would like to pass the floor to Beth Kerley, who will also join us for this panel. And Eloisa, I think you’ll present her, right?

Moderator:
Yes. So Beth Kerley, thank you for joining us today. Beth is a program officer with the research and conference section of the National Endowment for Democracy’s International Forum for Democratic Studies. She was previously associate editor of the Journal of Democracy and holds a PhD in history from Harvard University, and a Bachelor of Science in Foreign Service from Georgetown University. Thank you for being here today, and the floor is yours. Thank you. Beth, can you hear us?

Beth Kerley:
Hi, I’m sorry I was muted. Barbara, I think you need to unmute my video as well. I can’t. Now we are hearing you. I’m able to unmute my audio but not my video. I guess I can start talking and perhaps my face will show up later on in the proceedings. So, thanks Barbara and thanks everybody. I’m sorry I can’t be there in person but really looking forward to this discussion. And so, Barbara, that was really fascinating, I hadn’t seen those slides before, but I think those cases that you shared are really great illustrations of some of the broader points I was hoping to make here. And so, um, oh, there, I have a video too. And so I think what I’m going to do in these remarks is first try and situate those examples in some broader global trends that we’ve been tracking, and also highlight how the potential use of emerging technologies like biometric surveillance in connection with cameras in public spaces poses additional risks. So, to frame the comments a little bit: in an essay on what he calls subversion, Ron Deibert at Citizen Lab has written about the risks from surveillance vendors making more widely available, to both government and private clients, capabilities that would previously have been available to just a few well-resourced states. His focus in that article is on the profound challenges to democracy from commercial spyware, which tracks us through the devices that we carry with us. But I would argue that this question of the growing accessibility, spread and, if you like, democratization quote unquote of surveillance technologies, and their intertwining with the broader surveillance capitalist ecosystem, very much applies to the devices that other people place in the physical world around us as well. And in that regard, there are three main points that I’d like to cover.
First, network surveillance of physical spaces is rapidly emerging alongside traditional digital surveillance as a pervasive reality that changes the conditions for engaging in public life and exposes people to targeting by both public and private entities. Second, emerging technologies such as biometric surveillance or so-called emotion recognition are enabling the entities that control cameras in public spaces to do new things with them. And third, commercial suppliers play a crucial role in the spread of physical surveillance technologies to both public and private sector clients, and their involvement, as Barbara very correctly stated, presents challenges to enforcing transparency and accountability norms. So, on the growth of surveillance cameras. Already in 2019 there was an estimate that by 2021, which is two years ago, the number of surveillance cameras installed globally would exceed 1 billion. A significant number of those cameras, more than half, are in China. But established and emerging democracies are also home to staggering numbers of cameras. Those include smartphone cameras, as well as cameras that have been installed in commercial settings, which was traditionally an anti-theft measure, but now you see commercial entities installing surveillance as actually a kind of consumer convenience, letting people skip the checkout line to have their faces scanned instead. The line between public and private surveillance can be very thin. So the system that Barbara described would be a perfect example of that. In India, there’s similarly an app that lets citizens share footage from their private CCTVs with the police, and that’s just one of many cases. Those kinds of partnerships can reflect genuine public concerns about crime, but they also raise challenging questions on how privacy and anti-discrimination safeguards can be applied when law enforcement functions get outsourced to untrained citizens.
Citizens can of course themselves also misuse access to CCTV, for instance to digitally stalk strangers or acquaintances or to engage in blackmail, but the blurry line between public and private surveillance also works the other way around, by which I mean to say that the private vendors who supply surveillance tech to public entities play an important role that’s increasing as that tech itself gets more complicated. For instance, when companies sell smart city packages, their selling points, strengths, and profit logic can play a large role in determining what’s included as part of those packages. Some great reporting from Access Now has shown how companies like Dahua have incentivized officials in Latin America to adopt their surveillance tools by offering so-called donations. And finally, vendors after the point of adoption can become very closely involved in managing the tools and of course the data from public surveillance projects. So simple CCTV cameras present plenty of risks, but the new AI tools that researchers like IPVM have identified as the drivers of the video surveillance market are multiplying these risks by letting people make sense of the images that are captured quickly and at scale. In a 2022 report for the forum, Steve Feldstein identified 97 countries where surveillance technologies involving AI are in use. And I think it’s safe to say, given all the trends around us, that that number has since grown. Facial recognition technologies are probably the most widely discussed quote unquote enhancement to surveillance cameras. And they might be sold as part of the package together with cameras for so-called live facial recognition, but facial recognition can also be applied to ordinary camera footage after the fact using software from private vendors like Clearview AI, which was mentioned earlier. And the risks of facial recognition have been pretty widely discussed. I think it’s the best known type of AI surveillance.
So just to very quickly recap, when it doesn’t work, facial recognition can lead to false arrests, a harm that has very specifically affected black communities in both North and South America, as documented by Joy Buolamwini and others. And when it does work, facial recognition alongside other forms of biometric surveillance like voice or gait recognition makes it much easier to use cameras to track specific individuals. This again has potentially legitimate purposes, but it can very easily lend itself to political abuses, as with its use to track and identify protesters and dissidents in Russia and Belarus, which you’ve already seen. And it also puts the potential for private citizens to abuse facial recognition technology in greater reach. So also in Russia, there was a widely publicized lawsuit in 2020 that started when an activist was able to buy images tracking her own movements from the city’s facial recognition system on the black market for just $200. There are slightly different challenges presented by emotion recognition technology, which doesn’t focus on someone’s identity per se, but tries to infer their emotional state based on facial cues. Analysts have very sharply criticized this approach as pseudoscience. And it’s also not hard to understand the ways in which it might be abused to ensure at least the perception of conformity with government policies. But there’s still a strong commercial interest in that kind of technology, whether to monitor students, drivers, and criminal suspects in China, or to test and target ads in Brazil and the United States. And again, we see this kind of technology actually installed in public spaces so that the billboard is looking back at you, so to speak. And finally, AI means that surveillance cameras in public spaces aren’t working on their own.
Analytical tools can combine information from biometric surveillance tech with information from other online and offline sources, like shopping records or government databases, to build profiles of people or of groups. On an aggregate level, all this information collection can both exert a chilling effect and enable abusive behaviors by data holders, both public and private. And just to go through a few of those: first, profiles of people and groups can be the basis for targeted information operations meant to deceive and polarize, something that Samantha Hoffman has worked on. Second, as Cathy O’Neil has described, profiles can enable discrimination, whether in the form of withholding state resources, targeting advertisements in a way that disadvantages certain people, or through negative treatment by law enforcement. Third, digital rights activists worry that the mere presence of biometric surveillance in public spaces, and this would certainly extend to cameras more generally, whether or not they’re working, can have a chilling effect on people’s willingness to attend public protests, journalists’ ability to meet with sources, and other vital civic activity. And finally, profiles can enable governments to track people’s behavior in minute detail and exercise control through subtle systems of rewards and penalties in the manner that is somewhat loosely envisioned by China’s social credit initiatives. So finally, why does it matter that private companies are so deeply involved in surveillance? I think whether we’re talking about genuinely private surveillance or public-private partnerships, there are a few basic challenges, and these include, first, data access. So vendors who partner with governments on surveillance projects are likely to have a commercial interest in keeping the data that’s collected. And that’s all the more true as companies are seeking to train and refine AI tools that depend on data.
Democratic governments, on the other hand, have an interest in following principles like data minimization and purpose limitation for data collection. And in the project that we worked on together as part of the Forum Smart Cities Report, I know Barbara and her co-author, Belinda Santos, pointed out that a lot of the ICT contracts she was seeing in Brazil did not have specific provisions on how those private partners could use the resulting data. That kind of gap is part of a broader trend, which raises the risk that data collected for a public purpose important enough to justify some privacy infringement then gets reused by commercial companies for reasons that would not have justified that infringement, or gets resold through the ecosystem of data brokers, or even shared with foreign governments, if we’re talking about foreign companies operating in different countries. Second, transparency. Public institutions in democratic societies are, at least in theory, supposed to follow transparency norms, whereas private companies are not subject to the same rules, and they’re naturally going to be inclined to try to protect their intellectual property. This can make it difficult for citizens, NGOs and journalists to find out how the surveillance systems that are watching them work. And I would argue that this is going to become a more important issue as surveillance technologies themselves get more complex and need to be evaluated for issues like encoded bias. And finally, when private surveillance feeds into public surveillance, it can be difficult to maintain clear lines of accountability for abuses. Again, these challenges are likely to grow as citizens experience infringements, such as unfavorable government decisions that they can’t have explained, made by inscrutable technologies based on a mix of public and private data that’s been collected about them.
So private surveillance, especially in an age where the trend is toward cloud-based and AI-enabled surveillance, is very deeply entwined in a broader surveillance ecosystem that crosses boundaries of sector, of country, and of the physical and the digital world. And that ecosystem is enabling new types of infringements on human rights. We see these being taken to an extreme in authoritarian settings, but they’re relevant to all of us as we grapple with the ways in which technology is changing the landscape for privacy. And these risks really raise the urgency, especially with private actors playing such a large role, of multi-stakeholder engagement to develop new guardrails for democratic rights in a world where incredibly powerful surveillance tech is now in easy reach for governments, companies, and even private people. And on the question of solutions, since I think I’m about at my time, I am going to shamelessly turn things over to the next speaker in hopes that they will provide some answers. So again, thanks very much, and I look forward to the discussion.

Moderator:
Thank you, Beth. Thank you so much for the rich contributions to this discussion. And now I will pass to Swati Punia. Swati is a technology policy researcher based in New Delhi, India. She is a lawyer by training and has earned certificates in digital trade and technology, cyber law and corporate law. Currently, she works with the Centre for Communication Governance, an academic research centre based at the National Law University of Delhi, on issues at the intersection of technology, law, policy and society. Her focus areas include privacy, data protection, data governance and emerging technologies. At present, she is examining the non-crypto blockchain ecosystem in India and studying its potential for addressing socioeconomic challenges, creating inclusive governance models and embedding privacy in the context of developing countries of the global South. Prior to joining CCG, Swati worked with a leading Southern voice on fostering consumer sovereignty in the digital economy. Swati, thank you for joining us today, and the floor is yours.

Swati Punia:
Thank you so much, Barbara and Elisa. It’s so lovely to be on the same panel as all of you, and thank you to Beth for laying out so aptly and elaborately the impact and implications of surveillance. I think that allows me to dive deep into the question that was asked of me, which is essentially: what are the social inequalities and the discrimination involved in these kinds of surveillance practices, and what can civil society do in terms of bridging some of the big cracks that we’re seeing develop in society? To hinge it to what Barbara was mentioning: what’s happening in Brazil is not a standalone thing. We’re seeing it happen across the world, and India is unfortunately not behind any of these trends. We’re emulating all of these trends in terms of automating surveillance. A number of Indian states that I know of have been named among the biggest surveillance states in the world, not just in the country. It seems like every state in India is in a race, competing to automate surveillance; that seems to be the top priority. Having said that, the good part is that civil society has been an active player, studying and researching this development, and in the last couple of years moving to the courts when these kinds of instances come up. But essentially what I want to highlight is that, given all of these instances and these kinds of systems being put in place, the most important thing is the public-private partnership aspect. Often we see these public-private partnerships add efficiency, but here the main question is: to what end and for what purpose? Private partners are not just involved in developing the technology for the state and deploying it; they are often also involved in managing it and upgrading the systems.
And this is anybody’s guess: nobody really knows how, or whether, they’re involved with the data management. Nobody knows, when police stop a person on the road for random biometrics, random face recognition and all of those clicks that they take, where that data lands and what purpose it is used for. And unfortunately this is despite India, six years back in 2017, having the landmark judgment on the right to privacy passed by the Supreme Court of India, which gave a spectacular turn to the jurisprudence on fundamental rights. The Supreme Court tied the right to privacy to the right to life, liberty and dignity, reading it as an important facet of ensuring equality and freedom of speech and expression, and at the same time placing people at the heart of new-age policymaking. But we’ve not seen enough happening on this front. One can be positive, with the new data protection act coming into place and all of that, but one important thing the Supreme Court categorically mentioned there was that privacy cannot be used to further systemic inequalities.
Now, what that means is that everyone recognizes that automation is not creating something new; it is often exaggerating what is already pervasive in society. And we all know that the societies we live in are not exactly balanced; a range of inequalities is deeply entrenched within them. So I think the main problem is not automation itself. What I mean to say is that we should take a step back and ask how we really understand crime and criminality as concepts, and start from there. If automation is just a tool that exaggerates everything, then we should step back and examine our misunderstandings and misconceptions of what a crime is really made of. Because if you look at all these CCTVs and gadgets, they are really being put into force to handle petty crimes on the street, in a very set place. You’re putting in so much resource, money and effort to handle this one type of crime, but is that what contributes to the larger criminality in society? What is the percentage of it? Where is this kind of behavior by the state, or by the private sector in conjunction with the state, leading us, and what kinds of society is it creating? So in that sense, every state has its way of defining crime and criminality, and I think we can all come together on the understanding that a lot of the people we look at as criminals are often people from historically marginalized communities, people from below the poverty line, people who have already experienced and lived through unequal treatment from the state and from society.
Certain religious castes and sects already suffer these kinds of discrimination, and that kind of social inequality then gets highlighted through technology, and even exaggerated and entrenched. And the fear is that a lot of these inequalities, through the use of all the automated techniques that Beth also talked about, will get regularized into the way things function going forward. So I think the million-dollar question is: who is going to make the assessment that the kinds of crimes we’re trying to handle are the real crimes? I’m not saying that none of this should be done, but to what end and for what purpose? Another reason some of these measures are being put in place is to check people’s behavior. The understanding seems to have developed, and even become popular, that if somebody is constantly being surveilled, good behavior will get internalized. One cannot deny this completely, because yes, we’ve seen a lot of studies which support the idea that constant surveillance might make a dent and some internalization can happen. But again, I would go back to the same question: how many of the crimes we’re trying to tackle are actually getting corrected through this behavioral surveillance, and what are these crimes? Are there bigger crimes, like financial crimes, which maybe need more attention? So maybe what needs to be looked at is: are we trying to plug small loopholes and small gaps while turning a blind eye to the big cracks and holes which are getting deeper and wider? And one important aspect of criminality and crime is the question: is crime, as a concept, generally behavioral or structural? I think we should go back to thinking about that, because my limited
understanding of the whole issue is that crime is generally structural, not behavioral. There are a lot of studies, at least that I’m aware of in the Indian context, written by people across civil society, and one work I’d love to highlight is by a colleague, Shivangi Narayan, who did an ethnographic study of one of the states in India, the national capital, Delhi. She categorically gets into how policing and the construction of the idea of criminality impact society, why we define and decide to employ certain kinds of measures, and how they do not really work toward creating a better society; it’s actually just the opposite. So in that sense, I think we need to go back to some of these ontological and taxonomical questions and then assess where we are moving and why. I think civil society’s role is extremely important. It is, of course, often working in its own silos: within civil society, academics work within their closed space, lawyers with themselves, and the larger NGO ecosystem in its own space. I think a lot of conversation with each other is important, so that they can share work and build a better understanding. For example, lawyers might be looking at laws that still exist today: despite India having the landmark judgment on the right to privacy six years back, a lot of how surveillance gets defined in India is still decided by laws that predate the Puttaswamy judgment. And even after that, we don’t see much change happening.
At the same time, lawyers working on these kinds of laws should be talking with NGOs who do ground research, who understand how these marginalized and vulnerable communities really get impacted, and who can bring out those instances and experiences in conjunction with their secondary research on laws and policy framing. I think that will help us build a clearer picture and better resources. Another thing is that conferences and discussions like the one that Elisa and Barbara are hosting, which allow people from different geographies across the world, in the Southern Belt and the global majority, to come together to discuss these issues and figure out the similarities and the differences, the divergences. Often, my understanding is that there are a lot of similarities, synergies and shared experiences in the kind of familiar socio-political and cultural context that we have in this part of the world. So it will be fabulous, I think, in terms of growing together, understanding and learning from each other’s experiences, what we can change and how we can look at the subject. I’ll stop there, and happy to come in again. Thank you, Swati. Thank you

Moderator:
for the rich reflections you have made. Now I will pass the word to Yasadora Córdova, who is representing the private sector. Yasadora is the principal privacy researcher at Unico IDtech, a biometric identity company. She has worked with various organizations such as the World Bank, the United Nations, Harvard University and TikTok on projects related to digital citizenship, online security and civic engagement. Thank you so much for joining us today, Yasadora, and the floor is yours.

Yasadora Cordova:
Right, thank you so much for the invitation. It’s always a pleasure to be in any event that InternetLab invites me to, and I have just a little bit of information to add. The first point I would like to feature, as we navigate the intricacies of identification technologies, is the nuanced distinction between biometrics and facial recognition, because that is where the question of user control takes center stage. Biometrics, as a comprehensive concept, involves recognizing individuals through unique physiological or behavioral attributes, such as fingerprints or iris scans. Crucially, what sets biometrics apart is the insistence on user consent or authorization. For example, in countries where many people have no digital literacy, it’s easier and safer for them to use their biometrics to buy, access social benefits, or complete transactions using their own identity than to keep passwords. So when you talk about biometrics, you also have to emphasize the importance of user control over the data collected about them. The users see that their data is being collected, and they use biometrics because they want to open up a set of opportunities they didn’t have before, because they couldn’t keep their passwords safe or couldn’t use the system because it was too complicated. In contrast, facial recognition, which is a subset of biometrics, hinges on the analysis of facial features for identification. This method can operate without explicit user consent or even awareness, so it raises concerns about privacy, freedom of expression and personal control. Here the crucial point emerges: user control is paramount. The fact that entities like law enforcement can retain and edit videos recorded by body cams, for example, underscores the potential misuse of data.
So the power to control such sensitive information should ideally rest with a neutral third party, a citizens’ council, or something like this, at the very least, to preserve the user’s autonomy over their own identity. Preserving user control becomes not just a matter of privacy, but a safeguard against potential misuse, and it’s not an expensive safeguard. It highlights the need for robust ethical frameworks and regulations, but also the need to put the data under the control of those who are actually the origin of the data, if we’re talking about biometrics. So we could create international rules, or at least talk about rules, that separate these two different types of identification technologies, so that we have better frameworks to protect people who are being filmed and having their facial biometrics collected, as with Clearview AI, for example. We could demand that these companies have a way to inform users that their data is being collected, and offer an option for those users to withdraw their consent, withdrawing the companies’ permission to trade this data, or to collect it or keep it in their database, instead of just assuming that it’s an impossible question. There are uses for biometrics. Biometrics is already being used to create opportunities in some countries and to make technology better and safer. But this is not going to happen if the user is not part of the decisions over their own data. So I think the crucial conversation should not be around the type of data being collected, because it could be biometrics, or it could be another very sensitive type of data that you are not aware is being collected. Who controls this data is a more important question right now than what type of data is being collected. I think that’s it.
And this is also a solution that can reach end users and help us build trust and give control back to the users. That’s what I had to say. I’m happy to take questions or feedback later, if you have any. And that’s it.

Moderator:
Thank you so much, Yasadora. So now we have around nine minutes, and I will quickly open the floor to those who are here and may have a question. I will ask you to come close to the table to get a microphone, and we will do a quick round of interventions. For those who are online and have any questions or interventions, please write them in the chat; we have someone here who will relay them. After that, we’ll do a quick round of wrap-up with our speakers. So we do have two questions here. Please.

Audience:
Thank you for sharing your very interesting thoughts about data security and who should control the data. I would like to hear your opinion on blockchain technology: do you think that blockchain could be a solution, specifically for collecting biometric data? Do you think the technology itself might help control access to that data?

Moderator:
Thank you. I think we have another question there.

Audience:
Yeah. My question pertains to India, essentially. There is a very recent development, earlier this year, where it was made known to the public that there is something called real-time surveillance happening. This was in a reply to a right-to-information request, and the reply was from the Internet Service Providers Association. So in light of this, and with our act having been passed but yet to come into force, my question is: are there any safeguards that the speakers would like to highlight? I understand one such safeguard was just mentioned, but what about others for protecting users and giving them certain actionable rights, for instance, being made aware of all the data that is being processed, or even a notice showing that they are under surveillance, specifically in public areas? Just wanted your thoughts on that. Thank you. So now I think we have one question in the chat. We have one question online from Ayawalesh Bashi, I’m sorry if I mispronounced your name, from Australia, from civil society. The question is: do advanced technologies such as AI, blockchain, EOT, IoT, NFC, NFT and QC increase private surveillance in public spaces? All these technologies create big data and information, and these days data and information are wealth, wealth accumulated in developed nations. All these technologies perform activities and services via the internet. So the question is: what will be the solution for end users? So far, that’s the only question we have in the chat. So thank you.

Moderator:
So thank you. I will get back to our speakers and do a quick round of wrap-up. I will ask you if you want to add any final considerations, and any thoughts you may have on how regulation and policy in general can work to address these concerns. Please feel free to pick the question you feel most comfortable answering; you don’t need to answer them all. I will start back with Beth.

Beth Kerley:
Sure. So, difficult questions there. But I think on the question of types of safeguards, it definitely does depend on what type of tech we’re talking about. Following up on Yasadora’s remarks, I would distinguish not just between facial recognition and other forms of biometrics, but also between biometric identification and biometric surveillance. The things you were talking about would mainly fall under biometric identification, where users basically intentionally use a certain physical attribute as the way to access a space, or access their account, or what have you, within a particular system. And in that context, I think it’s easier to apply the consent framework. Of course, there are also other forms of biometric surveillance besides facial recognition that are very hard to opt into, like voice recognition or gait recognition. Something like a fingerprint, which is the one I am actually willing to use on my phone and my computer, is slightly harder for someone to get from you unawares. So I would agree with that distinction. And when we’re talking about biometric identification, I think there are indeed valid purposes for it, but there’s a really heightened need to establish appropriate safeguards, because even if you’re handing it over for a legitimate reason right now, it can end up later on in the hands of entities you would prefer not to have it. And unlike a password, you can’t change your fingerprint as easily. I do think that’s a fundamental distinction. But I would agree that identification versus surveillance is important. And in terms of blockchain, I am less of an expert on blockchain. Instinctively, I think putting sensitive data in a system that is designed to be unerasable is a move we should definitely think twice about, but I’m open to arguments on that one.
And real-time surveillance, finally, I think that is really the hardest thing to put safeguards around. And that’s why a lot of European digital rights groups in the context of the EU AI Act have been arguing that that’s something that should simply be banned, having constant awareness of who’s going in and out of public spaces. I think at the very least, you need to delete any data that is collected that way very clearly, and definitely agree with the suggestion of making people aware of when they’re being surveilled and what information about them the government possesses. In settings that have very elaborate e-government systems like Estonia, that’s actually part of the safeguards that are built in to ensure trust. So that could certainly be part of the answer. I do not have the comprehensive solution, unfortunately, to the challenge of emerging technologies and surveillance. Otherwise, I could write one report and go home.

Moderator:
Thank you, Beth. I think none of us has the solution. So thank you, actually, for all your contributions. And I will pass now to Swati.

Swati Punia:
Thank you, Elisa, and you rightly say that none of us has the solution. But it’s good that at least we’re coming together to discuss this, and to think of ways we can work together for a better response in society. Beth talked about blockchain, and my next panel right after this is on blockchain; those interested, please join us there. But I’ll speak to the point on the consent and notice issue. Again, maybe this is how my brain has been wired over the last few days, but I want to step back and really look at some of the concepts we’re bringing into the digital era of policymaking and regulation. With notice and consent, how is somebody from these vulnerable and marginalized communities, or even people like us, whom we call the educated class, supposed to use it? A lot of us really don’t have the digital literacy; I would say for myself that I don’t have enough financial literacy despite being educated. I really think the main issue is that the government is doing barely anything beyond using the word empowerment. Of course, that word is nice, and it’s used across all sorts of regulations, but for somebody to use, implement and understand notice and consent, you need some level of digital literacy. People wouldn’t even recognize harms, I think, when they happen to them. So I feel that a lot of the technology being used in the name of trust should be focused on building privacy and security by design, given the kinds of communities and public that we have, and the work we need to do on digital skilling and understanding should be taken much more seriously. And I think that’s where the CSOs are playing a massive role.
Just to give an example, at the Centre for Communication Governance we’ve been building a privacy law library which traces privacy jurisprudence across 17 or 18 jurisdictions in the world. We also maintain a regional high court tracker, where we map how India is looking at privacy and the expanding rights there, and how it is tackling them. And we do capacity building, not just for students and professionals but also for judges and bureaucrats, because a lot of the people who will now come into enforcing and implementing the new act really don’t understand the nuts and bolts of how to go about things. India, and I think a lot of similar countries, are jumping directly to a privacy 3.0 or 4.0 situation without having lived through it gradually, as Europe and some other countries did. So I think we have to be cognizant of that kind of social, cultural and political environment, and then think of ways that fit our specific pegs, not just copy-paste.

Moderator:
Thank you, Swati. Now we’ll pass first to Yasadora and then to Barbara so we can close the session. Due to time constraints, we won’t be able to take any more questions. I know there is someone online with their hand up, but we really need to close the session; I do encourage you to get in touch both with us and with our speakers. So, Yasadora, please, and then we’ll pass to you, Barbara.

Yasadora Cordova:
I’ll be real quick, I promise. I think we find ourselves in an era where data is amassed indiscriminately, not just biometric data, and this is propelled by both industries and governments. There is a demand for data. Amidst this deluge of information, safeguarding the integrity of personal identifiable information has become increasingly expensive; it’s a daunting task. The intricacies of structuring and cleaning data, which are integral steps in the machine learning cycle, are a challenge, and this process is undeniably among the most expensive activities in machine learning. So I propose a pivotal shift in focus towards user control. We know that you can’t control what you don’t see, and this resonates in the realm of data privacy, because if we require permission or consent over a data set, we need to make sure the data belongs to that person. If we demand this through regulation, we might end up compelling both governments and industries to bring their data practices to light. This shift is not merely about implementing complex blockchain solutions; it’s a call to collaborate, to build transparent systems, hand in hand with regulators and technologists. Of course, we will still have lots of work to do, even if we can conceive systems that are transparent about user data, but it’s crucial to recognize that transparency is the bedrock upon which user control stands. So it’s not just a technological challenge; it’s a societal demand, a societal imperative. And I believe that we have to work collaboratively to shape a future where individuals have a meaningful say in how their data is utilized, for real, in actual systems, and where ethical considerations guide technology’s feature backlog toward a responsible and sustainable data-driven future, I guess. So that’s it.

Barbara Simao:
Well, I think that Swati and Yasadora already answered a lot of what I wanted to say. But when we are talking about solutions and regulations, especially in the case of Brazil, I think the appeal of private surveillance solutions for the population in general comes from a place of insecurity and of not trusting public government solutions; people look to them as a way to overcome the lack of security they feel in general. And I think the solution would be societal, as Yasadora mentioned, in the sense that it would also require a high level of public trust in public institutions. When we are talking about regulation, especially in Brazil, we also have a lack of regulation regarding the use of technology and data collection for public security purposes. Not that these private companies actually provide public security, because they are private solutions, but when we ask them what they are doing, they can use the argument of public security. So it’s a tricky regulatory scenario, and in Brazil we still have a lot to develop in this sense; there is much room for more legal guarantees. Awareness should also be raised, so that the people who acquire these solutions are informed of the risks, of the grounds on which these companies can share data with public authorities, and of who might have access to it. And well, I think that’s it. I’m not sure if I added much to the discussion, but I would like to thank you all for coming, especially given the time zones, which I know weren’t so good for everyone. Thank you so much, and that’s it.

Moderator:
Thank you everyone. Thank you to our speakers for all the contributions and for having joined us today. And thank you to everyone who was here today, both in person and online, and who made excellent contributions. I hope you continue to have a great IGF. Thank you.

Speech statistics (speed, length, time):

Audience: 140 words per minute, 374 words, 160 secs
Barbara Simao: 141 words per minute, 1483 words, 633 secs
Beth Kerley: 172 words per minute, 2770 words, 967 secs
Moderator: 142 words per minute, 1085 words, 460 secs
Swati Punia: 182 words per minute, 2570 words, 846 secs
Yasadora Cordova: 128 words per minute, 1156 words, 543 secs

Beyond universality: the meaningful connectivity imperative | IGF 2023


Full session report

Martin Shepherd

The International Telecommunication Union (ITU) and the Office of the United Nations Secretary General’s Envoy on Technology have collaborated to establish targets for achieving universal and meaningful connectivity. To promote and measure the progress towards this goal, the ITU, along with the European Commission, has launched a project. The project has three key work streams: advocacy, measurement and capacity building, and research. These work streams aim to bring the concept of universal and meaningful connectivity to policymakers, collect and disseminate data, and track progress. This initiative acknowledges the need for everyone to have safe, satisfying, enriching, and productive online experiences at an affordable cost.

Accurate data tracking is regarded as crucial in order to make informed decisions related to universal and meaningful connectivity. The ITU’s Telecommunication Development Bureau plays a vital role in maintaining an online dashboard to track progress. This data-driven approach helps policymakers and stakeholders understand the areas that require attention and improvement. Furthermore, enhancing the statistical capacity of countries is essential to effectively measure the concept of universal and meaningful connectivity. The ITU, through its Data Analytics Division, is involved in collecting and disseminating data to support this effort.

The ITU indicators play a significant role in this project. These indicators are not limited to technical aspects but also encompass the number of internet users, their online activities, their perceptions of the connections, and their skill sets. This quantitative approach provides comprehensive insights into the supply and demand side indicators of universal and meaningful connectivity. In addition to the ITU’s quantitative indicators, UNESCO takes a qualitative approach, including many qualitative indicators in their data collection. This combination ensures a holistic assessment of universal and meaningful connectivity, enabling individual country assessments.

While the efforts of ITU and UNESCO in data collection are complementary, they are not perfectly coordinated. Nevertheless, both organizations share a common objective and are members of the Partnership on Measuring ICT for Development. This cooperative approach facilitates the exchange of information and promotes a collaborative environment for advancing the measurement of universal and meaningful connectivity.

One area that presents a challenge is the lack of good quality data on how communities use the internet. The ITU has yet to collect comprehensive data that accurately reflects the usage patterns and needs of different communities. This knowledge gap hinders the formulation of targeted policies and interventions to ensure equitable access and usage of the internet.

ITU’s focus on connectivity also means acknowledging the need to address safety, affordability, and the quality of internet services. The concept of meaningful connectivity extends beyond mere access; it encompasses the quality of the connection and affordable data plans. However, assessing the value of what people do on the internet remains a complex task, and the ITU intentionally maintains its focus on connectivity rather than evaluating specific services.

The organization led by Martin Shepherd takes a human-centred approach to internet usage. They emphasize the importance of considering the needs and experiences of individuals and communities, rather than solely focusing on businesses. Additionally, they are exploring alternative sources of data to enhance understanding and measurement.

While progress continues to be made, there are areas that require improvement. Martin Shepherd’s organization acknowledges the lack of good indicators for safety and security, as well as speed, and recognizes that the realities of rural regions may not be fully reflected in the data collected. However, the commitment to continuing the ITU project and the belief in its importance remain strong.

In conclusion, the ITU, in collaboration with various stakeholders, is working towards achieving universal and meaningful connectivity. This ambitious goal involves promoting and measuring connectivity, ensuring accurate data tracking, enhancing statistical capacity, and adopting a human-centred approach to internet usage. While challenges and areas for improvement exist, the commitment to this project and belief in its significance remain unwavering. By addressing these issues and leveraging partnerships, the goal of universal and meaningful connectivity can be realized, ensuring that everyone can benefit from safe, satisfying, and enriching online experiences at an affordable cost.

Anir Chowdhury

The analysis examines the state of internet usage and connectivity in Bangladesh, shedding light on both positive advancements and areas that require improvement. One significant point of progress is the increase in internet access and broadband connectivity across the country. It is noted that different cell phone providers have successfully covered 98% of the nation with 4G network, marking a considerable achievement. Moreover, 3,800 rural locations have been connected with fibre through collaboration with the private sector, while a service obligation fund has facilitated the connection of over 700 hard-to-reach locations, such as islands or hilly areas. Additionally, a new project was initiated recently with the aim of connecting around 110,000 institutions with fibre, further enhancing connectivity.

However, concerns are raised regarding the affordability and availability of devices, which still pose barriers to internet access for many individuals. Although the regulator has managed to maintain affordable internet pricing, the penetration rate of smartphones in the country is only 52%. This indicates that a significant portion of the population still lacks access to devices that can utilise internet connectivity. Despite the progress made in extending 4G network coverage, it is highlighted that only approximately half of the available network is being utilised, further underscoring the hindrances posed by device accessibility and affordability.

Another noteworthy point discussed in the analysis pertains to advancements in AI and large language models, which have the potential to redefine digital skills and literacy. Large language models in AI could compel people to adapt and acquire new digital literacy skills, while the inclusion of native languages in these models could simplify digital interaction for individuals with low literacy levels. This demonstrates the transformative role that AI and language models can play in shaping digital skills and accessibility.

Furthermore, there is a recognition of the need to design content and services that cater to specific groups in order to bridge the digital divide and reduce inequalities. The analysis highlights that services have not been tailored for the ultra-poor, persons with disabilities, women, or Cottage Micro Small and Medium Enterprises (CMSMEs). To address this issue, attention and effort must be devoted to designing services in a meaningful manner for these specific groups.

It is worth noting that policies and technologies are being implemented to improve connectivity and digital literacy in Bangladesh. Efforts are being made to address policy matters and deploy skills and technology for development. The importance of universal and meaningful connectivity is emphasised, particularly in relation to skills development and service design. Furthermore, an equality index is being worked on, indicating a focus on promoting gender equality and the inclusion of marginalised groups.

Looking towards the future, strategic insight is highlighted as a crucial aspect. The analysis mentions the prediction of humans, devices, and robots exchanging data, and stresses the importance of adequately preparing for the needs of the next five to ten years. This emphasises the need to future-proof connectivity and explore innovative approaches for data exchange.

In conclusion, the analysis provides a comprehensive overview of the internet usage and connectivity landscape in Bangladesh. It highlights the positive developments in increasing internet access and broadband connectivity, as well as the advancements in AI and large language models. However, concerns remain regarding device affordability and availability, the need for inclusivity in content and services, and the existence of a digital divide. Policy implementations and technological advancements aim to address these issues, with an emphasis on universal and meaningful connectivity. The analysis also acknowledges the importance of gender equality and strategic foresight for future-proofing connectivity. Overall, it appreciates the insightful discussion and the attention given to the various pertinent issues.

Dr Cosmas Zavazava via Video 1

During the analysis, the speakers emphasized the importance of enhancing internet connectivity and accessibility for those who are still offline. They highlighted that approximately 2.6 billion people are currently without internet access worldwide. The aim is to improve the internet experience for those who are already connected and make it accessible to those who are offline.

The speakers argued that this goal can be achieved through partnerships and collaborations. They mentioned a recent partnership between ITU and the European Union, which aims to adopt holistic approaches to enhance the statistical capacity of countries. This collaboration demonstrates the willingness to work together for enhancing internet connectivity.

Moreover, the discussion focused on the importance of universal and sustainable digital transformation. The speakers emphasized the need for initiatives, research, and technical assistance to enable this transformation. By implementing these measures, they believe that the benefits of digital technology can be harnessed in a way that ensures inclusivity and sustainability.

The analysis provided a positive sentiment towards efforts to enhance internet connectivity. The speakers recognized the challenges involved in reaching the vast number of people who currently lack internet access. However, they expressed optimism that through strategic partnerships, collaborations, and focused initiatives, progress can be made in bridging the digital divide.

In conclusion, the analysis underscored the significance of enhancing internet connectivity and accessibility for those who are offline. It emphasized the importance of partnerships and collaborations in achieving this goal, highlighting the recent partnership between ITU and the European Union. Additionally, the analysis highlighted the focus on universal and sustainable digital transformation through the implementation of various initiatives, research, and technical assistance.

Audience

The discussion centered around the concept of meaningful connectivity and highlighted the various aspects that need to be considered to ensure its effectiveness. One key point raised was that internet access is not limited to merely establishing a connection but should also take into account the availability of services and content in local languages. This emphasises the importance of tailoring internet offerings to meet the specific needs and preferences of local communities.

Furthermore, concerns were expressed regarding the adequacy of existing indicators used to measure meaningful connectivity. It was argued that these indicators may not fully capture the complexity of the issue, and that more granular, nuanced data measurements are needed to identify and address disparities within countries. The quality and accuracy of the data used in measuring meaningful connectivity were also called into question, emphasising the importance of improving the overall quality of the data used in such measurements.

In addition, the discussion highlighted the importance of adopting a human-centered approach in defining meaningful connectivity. This involves considering the needs and perspectives of communities and ensuring that the benefits of connectivity are equitable and accessible to all. Policy-making should be informed by a community-centric viewpoint to better understand what aspects of connectivity are meaningful and desired by different communities.

The session also addressed the issue of limited device availability, particularly in rural areas, which hinders the full utilization of network services. Strategies to address the affordability and accessibility of devices were emphasized to ensure that connectivity reaches its full potential.

In conclusion, the discussion underscored the need to go beyond simplistic measures of connectivity and focus on meaningful and inclusive approaches. It emphasized the importance of considering local languages, addressing disparities, improving data quality, and adopting a human-centered perspective. The session highlighted the importance of ensuring that connectivity is accessible to all, regardless of their geographic location or socioeconomic status. Overall, there is a need for comprehensive strategies to ensure meaningful connectivity for all.

Alexandre Barbosa

In Brazil, there is a pressing need to address inequalities in connectivity at various levels. Firstly, there is a need to understand and tackle inequalities in terms of infrastructure, usage, and proficiency. The quality of connectivity in terms of high speed and advanced devices is crucial. However, barriers to digital usage, such as education level, socioeconomic income, age, and gender, have resulted in unequal access and usage. Proficient usage of the internet also leads to tangible outcomes such as content creation and the promotion of well-being.

Low-income households in Brazil still face limited internet access, with only 62% of such households having internet access compared to 98% of high-income households. Moreover, rural areas in Brazil also have a lower proportion of internet access compared to urban areas. This creates a significant digital divide, both geographically and socioeconomically. The South and Southeast regions of Brazil, which are wealthier, have higher proportions of fixed broadband households, while connectivity in the Amazon forest region and Northeast is mostly covered by radio or satellite. These disparities highlight the need to bridge the gap and ensure equal connectivity for all.

Despite these challenges, Brazil has embraced the concept of meaningful and universal connectivity. The country has experienced significant growth in internet usage over recent years, and there has been a rapid expansion of fiber optic connection. Policy makers in Brazil have been proactive in conducting surveys into internet usage since 2004, demonstrating a commitment to understanding and addressing connectivity issues.

In addition to access and infrastructure, digital skills play a pivotal role in promoting meaningful connectivity. Mobile-only users in Brazil display a lesser proportion of digital skills compared to computer and mobile phone users. Without digital skills, the full potential of the internet cannot be harnessed.

Furthermore, Brazil places importance on data protection and privacy. The country has implemented surveys to measure alignment with personal data protection laws, indicating a strong commitment to safeguarding individuals’ information.

To enhance connectivity and address inequalities effectively, it is crucial to have universal and meaningful connectivity indicators in a disaggregated format. National averages without disaggregation may not accurately capture the extent of inequalities within a country. Therefore, a more nuanced approach is needed to accurately assess the state of connectivity and identify areas that require improvement.
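As a toy illustration of why disaggregation matters, the sketch below computes a population-weighted national access rate from a few groups. All group names, populations, and access shares are invented for demonstration (loosely inspired by the 62% versus 98% income-group gap reported for Brazil); the point is only that the headline average can sit far above the rate of the worst-served group.

```python
# Hypothetical data: each group maps to (population, share of
# households with internet access). These figures are illustrative
# only, not real survey results.
regions = {
    "urban_high_income": (60_000_000, 0.98),
    "urban_low_income":  (50_000_000, 0.62),
    "rural":             (30_000_000, 0.45),
}

total_pop = sum(pop for pop, _ in regions.values())

# Population-weighted national average access rate.
national_avg = sum(pop * rate for pop, rate in regions.values()) / total_pop

print(f"national average: {national_avg:.1%}")
for name, (pop, rate) in regions.items():
    print(f"{name}: {rate:.0%}")
```

Reported alone, the national average of roughly 74% looks respectable, yet the disaggregated figures show the rural group almost 30 percentage points below it, which is exactly the kind of inequality a single headline number conceals.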

However, concerns about the quality and availability of data persist. It is important to ensure the reliability and accessibility of data, as well as to promote the production of high-quality data. This can be achieved through conducting primary data and using internationally recommended methodologies with probability samples that provide disaggregated data.

Despite efforts to bridge the digital divide and promote universal and meaningful connectivity, a human-centered approach is lacking in the design and implementation of connectivity initiatives in Brazil. By prioritising the needs and perspectives of individuals, a more inclusive and equitable approach to connectivity can be achieved.

The concept of Universal Media Connectivity (UMC) is of utmost importance in the current era of disinformation and lack of skills for content creation and critical use of the internet. Digital literacy and content creation skills are vital for individuals to navigate the digital landscape effectively and contribute meaningfully. Brazil, along with other countries, should produce data that can measure progress towards achieving the UMC concept, further emphasising the importance of tracking and monitoring connectivity goals.

In conclusion, Brazil faces significant inequalities in connectivity in terms of infrastructure, usage, and proficiency. While progress has been made, challenges remain, particularly in bridging the digital divide and promoting universal access. By prioritising digital skills, data protection, and a human-centered approach, Brazil can enhance connectivity and ensure that all individuals have equal opportunities to benefit from the digital era.

Peter Mariën

The European Union (EU) strongly supports the concept of universal meaningful connectivity, recognizing its importance in achieving sustainable development goals. The EU is collaborating with the International Telecommunication Union (ITU) to work on this concept. It believes that robust data collection is crucial for measuring progress and success in achieving objectives. This perspective aligns with the EU’s emphasis on data governance and the value it places on accurate and comprehensive data to drive effective decision-making.

In line with its commitment to promoting digital transformation, the EU advocates for a human-centric approach. It prioritises the individual and aims to bridge the digital divide by ensuring access to an open and free internet. The EU also emphasises the protection of privacy and security in the digital realm.

The EU has taken initiatives to enhance cybersecurity, a vital aspect of safe and secure connectivity. It has established a regional cybersecurity hub in the Dominican Republic and is actively involved in the BELA program, focusing on cybersecurity. The EU mainstreams cybersecurity in its programming, recognising its significance in the rapidly evolving digital landscape.

A key argument put forth by the EU is the need to link infrastructure investment with investments in soft elements such as data governance, digital skills, and e-government. The EU’s collaborative efforts with Kenya in the digital package collaboration highlight the importance of this approach. Measures to improve last-mile digital connectivity, enhance vocational education, and implement data protection and procurement legislation have been implemented to ensure a comprehensive and inclusive digital ecosystem.

Data collection is deemed fundamental for effective planning and implementing strategies. However, collecting data at local levels can present challenges. The EU recognises both the importance of having data and the difficulties faced when collecting it in field and partner countries. This understanding underscores the EU’s commitment to leveraging partnerships for data collection and analysis to make informed decisions.

Despite the EU’s efforts, last-mile connectivity remains a challenge. It recognises that achieving universal connectivity necessitates the participation of both private and public operators, who must find it appealing to invest in infrastructure in remote areas.

The EU also acknowledges the need for foresight about future requirements. New technologies, skills, and systems may be necessary to address the evolving demands of the digital era. This highlights the EU’s commitment to staying ahead of the curve and ensuring that its strategies and policies are adaptable to technological advancements.

In conclusion, the EU is strongly committed to various aspects of digital development. It supports the concept of universal meaningful connectivity, promotes a human-centric digital transformation, and takes initiatives to enhance cybersecurity. The EU emphasises the importance of investing in both hard infrastructure and soft elements like data governance and digital skills. It recognises the significance of data collection and the challenges associated with it at the local level. The EU acknowledges the struggle with last-mile connectivity and the need to anticipate and adapt to future requirements. Finally, the EU advocates for taking action and making things better through organisations dedicated to improving health, education, and combating climate change.

Video 2

Universal and meaningful connectivity is crucial for driving digital transformation and working towards the achievement of sustainable development goals. It allows individuals to access a wide range of essential services such as education, healthcare, government services, and job opportunities. Universal connectivity helps bridge the digital divide, ensuring that everyone can participate in the digital age.

To effectively track progress towards universal connectivity, measurement and data are essential. Proper data usage enables better decision-making by providing insights into past, current, and future positions. The International Telecommunication Union (ITU) and the Office of the United Nations Secretary-General’s Envoy on Technology have established aspirational targets to guide efforts in this area. The ITU’s Telecommunication Development Bureau maintains an online dashboard, which transparently monitors and tracks progress towards universal connectivity.

Promoting universal connectivity requires a combined global effort. Recognizing this, the ITU and the European Commission have launched a global project that facilitates the expansion of connectivity. This project demonstrates the positive stance towards achieving universal connectivity and the commitment of various stakeholders to collaborate and make it a reality.

In conclusion, universal and meaningful connectivity is fundamental for digital transformation and the attainment of sustainable development goals. It provides individuals with access to essential services and promotes inclusivity in the digital era. By utilizing effective measurement techniques and tracking progress through data, we can move closer to achieving universal connectivity. The collaborative efforts of organizations like the ITU and the European Commission highlight the importance of global partnerships in accomplishing this noble goal.

Moderator

The session focused on the importance of universal and meaningful connectivity and the role of policymakers in achieving this goal. Its aim was to discuss the definition, reach, and impact of universal and meaningful connectivity, with the goal of exploring how it can improve the quality of life for all people. The concept of meaningful connectivity was emphasized throughout the session as a way to understand and address digital inequalities. The session also highlighted the need for robust measurement policies to ensure connectivity, with a suggestion to create a Universal and Meaningful Connectivity (UMC) Dashboard. Lithuania was commended for its progress in reaching UMC targets, particularly in ensuring broadband connectivity in rural areas. The importance of developing digital skills and promoting gender diversity in the tech industry was emphasized. Collaboration between governments, the private sector, and civil society was deemed essential for successful implementation of digital strategies. The digital divide in Brazil was discussed, along with issues of data accuracy and granularity in data consumption indicators. The challenges of last mile connectivity and the need for foresight in anticipating future needs were also explored. The session emphasized the significance of universal and meaningful connectivity in promoting sustainable development.

Agne Vaiciukeviciute

Lithuania is making significant efforts to achieve meaningful connectivity and digitization through a range of strategies. These strategies primarily focus on rural broadband connectivity, affordability, and the promotion of digital skills.

To ensure widespread access to the internet, Lithuania has invested in broadband deployment in rural areas through a non-profit organisation under the Ministry. By leaving the last mile of connectivity to the operators, the country has been able to keep costs affordable nationwide. In fact, Lithuania boasts the lowest prices for end users across Europe.

The commitment to digitisation is evident through the state digitalisation development program, which involves every ministry. This approach ensures that each ministry creates its plan to meet specific digital targets. The digitisation strategy is intended to be horizontal, cutting across all sectors, thereby promoting comprehensive digitisation efforts.

Public libraries play a crucial role in imparting digital education and skills, particularly through their network of 1200 public internet access points across urban and rural areas. Additionally, various NGO initiatives, such as Safer Internet Week, All Digital Week, and the Women Go Tech programme, contribute to promoting digital education and skills. These initiatives aim to enhance digital literacy and encourage women to enter the tech and IT world.

To achieve meaningful connectivity and digitisation, collaboration between the government, private sector, and civil society is deemed necessary. This collaborative approach enables the implementation of digital strategies and maximises their reach to different segments of society. It ensures that a wide range of perspectives and expertise is considered in the planning and execution of these strategies.

Municipalities and regional levels are recognised as crucial players in the digitisation process. They are the closest organisations to the people and hold the potential to significantly affect the digitisation process within their cities. In Lithuania, the majority of initiatives are taken by the municipalities, which highlights their importance in driving digitisation efforts.

Recognising the importance of rural areas, Lithuania aims to extend digital strategies beyond dense cities. It recognises that there is a need to attract and implement initiatives in these areas as well, to ensure that all citizens can benefit from digitisation.

Lithuania ranks highly in digitalisation for public services, as evidenced by its 8th place worldwide ranking according to the World Bank’s digitalisation for the public service index. The country utilises new technologies to enhance accessibility to services, and the majority of services can now be accessed through digital service approaches. However, initiatives like GovTech are also created to address the gap for services that cannot be reached yet through the internet.

The importance of local content and internet accessibility to digital services is emphasised in Lithuania. The country acknowledges that digital solutions should be customised to fit the local environment, rather than being copied from elsewhere. They actively involve civil society, the public sector, and the private sector in creating digital solutions. The successful GovTech project in 2019 serves as an example of this collaborative effort, which resulted in tailored solutions that fit the Lithuanian context.

Collaboration and coordination within the government and stakeholders are crucial aspects of achieving meaningful connectivity and digitisation. By working together, these entities can align their efforts, share resources, and ensure a cohesive approach towards achieving digital goals.

Furthermore, the importance of data quality is emphasised for insightful decision-making and progress measurement. Accurate and reliable data are essential in shaping effective digital strategies and tracking progress towards digital goals.

Lastly, considering the fast-paced nature of technological advancements, adaptability and flexibility are recognised as key attributes. It is important to be able to adapt and adjust measures and strategies in response to rapid changes in the digital landscape.

In conclusion, Lithuania’s multifaceted approach to achieving meaningful connectivity and digitisation encompasses strategies focused on rural broadband connectivity, affordability, and digital skills. Through collaboration among the government, private sector, and civil society, as well as the involvement of municipalities and regional levels, Lithuania strives to ensure comprehensive digitisation efforts. The emphasis on local content, data quality, and adaptability further enhances the effectiveness of these initiatives.

Session transcript

Moderator:
Thank you very much for your attention. I would like to invite the next speaker to come to the stage. Thank you. Good morning, everyone, and good day for those people following online. My name is Deniz Susar. I’m the co-chair of the IGF, and I would like to welcome you to this session. This session is entitled Beyond Universality: The Meaningful Connectivity Imperative. The objective of this session is to inform the audience how universal and meaningful connectivity is defined, how it can help reach underserved communities, which are some of the targets and baseline indicators needed to assess where a country stands, and what the impact of this policy is and how it can be used to improve the quality of life for all people, including the concept in national policy plans. The session will aim to answer two policy questions. One, how can governments and stakeholders ensure universal and meaningful digital connectivity for all people, particularly those in underserved and marginalized communities? And two, how can policymakers establish robust measurement policies aimed at achieving universal and meaningful digital connectivity? We have a great panel today that will give us the perspectives from very diverse countries, namely, Lithuania, Brazil, Bangladesh, and we will also hear from the European Commission how they partner with other parts of the world. We will also hear a recorded message from Dr. Cosmas Zavazava, the director of the ITU Telecommunication Development Bureau. After his message, Mr. Martin Shepherd, on my right, Senior ICT Analyst in the ICT Data and Analytics Division, will give a short introduction to the project on Promoting and Measuring Universal and Meaningful Connectivity. Martin, the floor is yours.

Martin Shepherd:
Thank you very much, Deniz. Good morning, everyone, also on behalf of the ITU; good morning here in the room, good morning online. As Deniz has mentioned, I would like to start with a short video of our Director, Dr. Cosmas Zavazava, who would like to say a few words to the audience. Can we have video number one, please?

Dr Cosmas Zavazava via Video 1:
Distinguished participants, ladies and gentlemen, I would like to welcome you all to this Internet Governance Forum workshop on Universal and Meaningful Connectivity. I’m not able to join you in person today, but I’m confident that this session will be very productive given the caliber of the speakers before us. We have a common goal here: to enhance the Internet experience for those already connected, and to make it accessible to the 2.6 billion people that are still offline. Our goal is to get everyone connected and enjoying the benefits of meaningful connectivity. We are committed to universal and sustainable digital transformation through our initiatives, research, technical assistance, and tools. Undoubtedly, we can do more working together. Our work at ITU is enabled through partnerships and collaboration. One of the key partnerships of interest to you is the one we recently forged with the European Union. Through this partnership, we aim to adopt holistic approaches that help enhance the statistical capacity of countries to measure multiple aspects of meaningful connectivity. On this note, I call upon all to work with us to make universal and meaningful connectivity a reality. I wish you successful deliberations at this workshop and thank you for participating.

Martin Shepherd:
And now I would like to shortly introduce the project that was mentioned in the video of Dr. Zavazava, on promoting and measuring universal and meaningful connectivity. And again, we have a little video, so if we can have video two, please, to give a general introduction, and then I will say a few words about it. Thank you.

Video 2:
Universal and meaningful connectivity is the possibility for everyone to enjoy a safe, satisfying, enriching and productive online experience at an affordable cost. It enables access to educational resources, health care and government services, job opportunities and much more. Universal and meaningful connectivity is the new imperative to enable digital transformation and meet the sustainable development goals. To meet this imperative, we must also address the measurement challenge. Data tells us where we were, where we are and where we ought to be and enables individuals, policy makers and businesses to make better decisions. The International Telecommunication Union and the Office of the United Nations Secretary General’s Envoy on Technology established aspirational targets for universal and meaningful connectivity to help monitor progress and galvanize efforts. In addition, ITU and the European Commission launched a global project to promote and measure universal and meaningful connectivity through advocacy, measurement and research. To support the project’s advocacy and measurement efforts, ITU’s Telecommunication Development Bureau maintains an online dashboard to track progress towards universal and meaningful connectivity. The dashboard lets countries know where they currently stand, where they ought to be and compare their performance against peers. The dashboard allows us to assess global progress toward each target. Let’s join forces to achieve universal and meaningful connectivity and unlock the transformative power of connectivity for everyone, everywhere.

Martin Shepherd:
If I can have the first presentation, please, then. Thank you. As you saw in the video, two years ago we launched a number of targets on universal and meaningful connectivity, and that’s really the genesis of the project that we’re talking about. We launched those targets as part of the digital cooperation roadmap of the UN Secretary-General, but of course just having targets doesn’t mean anything if there’s no action around them. So we were very pleased to find a very good partner in the European Union, and we’ve launched a three-year, three-million-euro project to promote and measure the concept of universal and meaningful connectivity, basically bringing the targets and the dashboard that you just saw to life. First of all, what we really mean by meaningful is not that we are telling people what they should do once they have an internet connection. It’s the possibility that everyone can go online at any time they want to, in a safe, satisfying, enriching, productive and affordable experience. So the quality of the connection should be good, and people should have an affordable data plan with enough data on it. We have a little diagram that shows what is included and what is excluded; in the middle you can see universal and meaningful: everyone should have a connection, and it should be meaningful for everyone. What we kept out of the project is how to get there and what comes out of it. Those are of course very important, but if we went into those aspects it would become too complicated; we first want to focus on getting a good-quality connection to people. I won’t go into too much detail on this diagram, but we have some papers that explain it if you’re really interested. What we’re doing in the project is promoting and measuring, so we have three work streams.
There’s an advocacy work stream that is bringing the concept to as many policymakers in the world as we can, so that there is increased awareness of UMC, short for Universal and Meaningful Connectivity. Then we have a work stream on measurement and capacity building. I’m from the ICT Data and Analytics Division, and that’s really the bread and butter of what we do on a daily basis: collecting and disseminating the data. But we also have a capacity-building aspect in all of this, because countries often need help in understanding which data are important and how to collect those data. So the output of that work stream is improved data dissemination, but also an enhanced statistical capacity of countries to measure the concept of UMC. And then finally we have a research work stream. Basically, every year we want to do a publication, the Global Connectivity Report, that shows us where we are, but also where we should be going and how we could get there; the expected output is better policies for achieving UMC. So these are the three work streams in more detail. The event we are at today is in the advocacy work stream, getting the concept out to policymakers. We want to do more events like this. We also want to prepare briefings that policymakers can use, so that they can understand what the concept is and how it should work, coupled with websites and social media campaigns. On measurement and capacity building, we already have a large data collection going on as part of our daily work. We have the UMC dashboard that you saw in the video, and I will show one more slide on that after this one. Then we want to do a number of regional workshops to explain the concept to countries and how to collect the data, and we’re going to create an online course for this. We also want to look into new data sources, to see if big data can be of help, for example, and how it can be of help.
And then finally, in the research work stream, looking for solutions to accelerate progress towards UMC and producing the Global Connectivity Report. So this is the dashboard; you already saw a bit of it in the video. We have a number of targets, and every target has an indicator attached to it. If you’re interested in one specific indicator, you can click on that indicator and see where all the countries in the world stand with respect to it. But if you’re more interested in a particular country, you can just go to that country and see, for all the indicators, how that country is placed with respect to achieving the targets. It may have met a target already, it may be on its way, it may be far away, or maybe there are no data, which is also an important indication. So that really is, in a nutshell, the aim of the project, and I think it’s now time to listen to the voices from the various countries on how they include UMC in their policies and how they measure it. So thank you, and back to you, Deniz.
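The dashboard logic described here, every target paired with an indicator, viewable either per indicator across countries or per country across indicators, with each value classified as met, on track, far away, or no data, can be sketched as a small lookup. All target values, country names and figures below are invented for illustration; this is not the real ITU dashboard data or API:

```python
# Illustrative sketch of the UMC dashboard logic: each target has an
# indicator, and a country's latest value is classified against the target.
# All names and numbers here are made up; they are not ITU data.

TARGETS = {
    "internet_users_pct": 100,        # share of population using the internet
    "households_broadband_pct": 100,  # share of households with broadband
    "affordability_gni_pct": 2,       # data plan cost as % of GNI per capita
}

LOWER_IS_BETTER = {"affordability_gni_pct"}  # for cost, smaller values are better

# country -> indicator -> latest value (None means no data reported)
DATA = {
    "Atlantis": {"internet_users_pct": 96, "households_broadband_pct": 81,
                 "affordability_gni_pct": 1.4},
    "Freedonia": {"internet_users_pct": 58, "households_broadband_pct": None,
                  "affordability_gni_pct": 6.0},
}

def classify(indicator, value):
    """Classify a value against its target: met / on track / far away / no data."""
    if value is None:
        return "no data"
    target = TARGETS[indicator]
    ratio = target / value if indicator in LOWER_IS_BETTER else value / target
    if ratio >= 1:
        return "met"
    return "on track" if ratio >= 0.75 else "far away"

def country_view(country):
    """All indicators for one country, like the per-country dashboard page."""
    return {ind: classify(ind, val) for ind, val in DATA[country].items()}

def indicator_view(indicator):
    """One indicator across all countries, like the per-indicator page."""
    return {c: classify(indicator, vals.get(indicator)) for c, vals in DATA.items()}
```

The "on track" cut-off at 75% of the target is an arbitrary choice for the sketch; the real dashboard defines its own progress bands.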

Moderator :
Thank you, Martin. That was a very good introduction; I have used the platform before, and it’s really helpful. This workshop is hybrid, and we are online in Zoom. We have an online moderator, Mr. Thierry Geiger, Head of the ICT Data and Analytics Division. I don’t know what time it is in Geneva, but he is there monitoring the chat. So we have dedicated people for this project, which is a very good sign. And if you raise your hand in Zoom, he will make sure that relevant questions are channeled to us. The first panelist is Ms. Agne Vaiciukeviciute, Vice Minister of Transport and Communications of Lithuania. Lithuania is a country that, in a relatively short period, reached most of the targets of UMC, Universal and Meaningful Connectivity. So we would like to hear from you, Vice Minister, if and how policy played a pivotal role in getting there, and also how important digital connectivity is to policymakers in Lithuania. Can we learn something from you that we can pass on to other countries? The floor is yours.

Agne Vaiciukeviciute:
First of all, good morning, good afternoon, and good evening. I like how we presented here in Japan. And thank you very much for having me on this panel. I think that ITU did very good work. I was analyzing the dashboard before I came to this discussion, and I was discussing it a little bit with my colleagues: are we really as good as these charts say? And we were laughing that, obviously, when we speak about universal and meaningful connectivity, it’s never enough. There is always a lot of work that could be done. So from the Lithuanian perspective, we look into meaningful connectivity through several aspects. Obviously, the first is broadband connectivity. I think we have quite a unique Lithuanian approach and model, one that has also ensured the affordability of being connected, because in Lithuania we have a non-profit organization under the Ministry that makes the biggest investments in broadband deployment, leaving only the last mile to the operators. This is how we can keep prices affordable across the whole country, and I speak here about rural areas in particular. I think it’s a very successful model, and I’m proud to say that we have the lowest prices for end users in all of Europe. So that is one way of looking at it: we need to be connected in order to create more opportunities for meaningful connectivity to appear. The other part we are really focusing on is our state digitalization development program, which is created for the whole country, but every ministry has to take part in it and has to create and initiate its own plan for how it will reach all the digital targets set for each sector, if I may put it this way. And now I would like to share with you several aspects of the… Another critical part is the skills and knowledge that are very much necessary to take advantage of digital technologies.
And there are other key factors, such as the involvement of other ministries and institutions, civil society and so on. One type of institution that is particularly important for the spread of digital skills and literacy is the public library: in Lithuania, these libraries run a network of 1,200 public internet access points in both urban and rural areas, operating in a decentralized manner. This is one of the good examples of how we try to reach those segments of society that are not so easily reachable. We also have a lot of NGO initiatives in Lithuania, such as Safer Internet Week, All Digital Week and Senior Online Week, so there is a lot of NGO work towards creating more opportunities and more skills within different groups of society. I also believe it is important to have programs aimed specifically at women, and we have a very well-known program in Lithuania, Women Go Tech, through which people who are already on their career path have the possibility to move into the tech and IT world. I think it corresponds very well with ITU goals of involving more parts of society in the tech and IT world and raising awareness of how it could be helpful for a better future in our countries. And obviously, I did not mention it yet, but a huge part belongs to the private sector as well, so collaboration between government, the private sector and civil society is necessary here. From the government perspective, just to highlight one more time what is really important: whatever digital strategies we create should go horizontally through all sectors and all ministries. I think it is quite common in different countries that there is someone who owns the policy in the area and then the others just try to implement it.
So what we are trying to do in Lithuania is make sure that everyone understands the digitalization process and meaningful connectivity as a part of their job and their targets, and that they are not only involved but actually really a part of the process. Thank you very much.

Moderator :
Thank you very much, Ms. Vaiciukeviciute, for that presentation. We now have some time for questions from the people here around us or from online. Please just signal to me and I can pass the microphone to you. In the meantime, I have a question for the Vice Minister. You highlighted the sectoral importance of digital strategies. What about the local and regional level, the municipality level? Do you have any strategies in that regard?

Agne Vaiciukeviciute:
I think the municipal and regional levels are the most important and key levels, because municipalities are the organizations closest to people. I myself belong to the council of the Lithuanian capital, so I am well aware of how municipalities can affect the digitalization process within their cities. So obviously this is a layer that should be distinguished and recognized as a crucial part of the digitalization process. All these initiatives tend to start in the denser cities, and our job is to bring them more into rural areas as well. So I think the municipal level plays a crucial role here, and in the Lithuanian case the majority of initiatives in this area are taken by the municipalities as well.

Moderator :
Thank you so much. Let me look around to see if there are any questions. Yes. Can we please pass the mic? Or you can come over there, so that people online will be able to see you. Please introduce yourself and keep the question short.

Audience:
Thank you. I’m Giacomo Mazzone, one of the co-chairs of the Policy Network on Meaningful Access, which will gather this afternoon in the main hall at 5.30; that concludes the advertising break. Two questions. One is about the statistical research made by the ITU. I see that you make these efforts, but they are limited to the technical part, while UNESCO, for instance, with the ROAM indicators, evaluates other, qualitative aspects of the offer available on the Internet, which are equally important. Is there any coordination between these two efforts? And a question for the Vice Minister of Lithuania: access to the internet does not end at the moment when you get a connection; the problem is how many of the services and contents are made available in the local languages for ordinary people. Are you also considering this in your local policies?

Moderator :
Thank you. Thank you, Giacomo, for that question. If you allow me, Martin, let me first give the floor to Ms. Vaiciukeviciute, on the importance of local content.

Agne Vaiciukeviciute:
Thank you very much. The question hits a very good spot, because Lithuania pays a lot of attention to e-governance solutions, so that accessibility to different types of services is very easy. I think we were recently placed 8th worldwide in the World Bank’s index on the digitalization of public services. In our case, I think it is quite helpful that the country is relatively small. The majority of services can be accessed very easily through digital channels. Society readily adopts newly introduced technologies, and COVID enhanced all these processes even more, so that now I would say you can basically do everything through the Internet very easily. Obviously there is a gap for older people, but I think that is quite common everywhere, so we need to find different ways to reach them. But the majority of society is very well aware of the digital solutions and can do about 90% of the necessary activities through the Internet. So this is a very good point, and we put a lot of attention there. We even have GovTech initiatives to create sandbox regimes and more digital solutions, to close the gaps in those services that cannot yet be reached through the Internet. It is a very successful project as well, an award-winning project from 2019. It showed that this is a very successful way to involve civil society together with the public sector and the private sector to create good solutions that fit the Lithuanian environment, because you cannot just take a solution from somewhere else; you need to adapt it to the circumstances you live in. Thank you.

Moderator :
Thank you very much, Vice Minister. Martin, quantitative versus qualitative.

Martin Shepherd:
Yes, that’s the short summary of it, indeed. The ITU indicators are not just technical, they are quantitative, because it’s not only about the pipes, the internet, the subscriptions, but also about the use of it. So we have two types of indicators, supply-side and demand-side, and on the demand side we have how many people use the internet, what they do when they use it, how the connection feels, the activities, the skills, etc. The UNESCO indicators also draw on our indicators; there’s a part that is really in common, and there’s collaboration in the sense that they use our indicators, and at the beginning of the project we also talked together about this. But they go much further; they have many more indicators, and a lot of them are qualitative, which doesn’t really fit very well with the type of work that we are doing: we like to collect quantitative indicators. So UNESCO goes further, and what they do is also more of an individual country assessment, where you really need to go to the country and do the assessment, whereas we just use the indicators that we collect for all the countries and put them there together. So I think the efforts are complementary. We are coordinated, although maybe not perfectly, but I know there’s now a project going on in the Pacific where ITU is directly involved with UNESCO. And again, our data are free for everyone to use; UNESCO uses them, and we certainly work together in that direction. There’s the Partnership on Measuring ICT for Development, an international collaboration between international and regional organizations, of which both ITU and UNESCO are members. So in that framework, we’re also coordinated. Thank you.

Moderator :
Yes, thank you, Martin. Not only UNESCO, but UNDESA also uses your indicators, in our e-government survey; we have been using them since 2003, so that’s very helpful. We may get one last question, if there is one.

Audience:
Please go ahead. Hello. My name is Nils Brock, from RISE America and DW Academy, and I have a question about the methodology. Local network initiatives work a lot around the needs of communities, and I saw this as a category in the methodology, which is really great. I was wondering, when it comes to services, that is, the services that give meaning to a specific connectivity, how far is this measured in your categories? And, as a second question, if it is not foreseen as such, is it possible to enhance the statistical capacity on this side, since that was mentioned, along with the identification of new data sets? So also a question of how far this methodology is open to additions, or whether the established categories are now closed for this timeframe. From a community perspective, broadband access may not be an option for many rural communities in the next year, so the services that are meaningful can be quite different, because big platforms or data-hungry applications may simply not give meaning if they are not accessible. So, turning the question around a bit: what, from their view, gives meaning to connectivity? Thank you.

Moderator :
Thank you, thank you so much, I think this is a question for ITU.

Martin Shepherd:
Yes, thank you for the question. It’s a type of question that we have had before, but we’re actually staying away from this a little bit. To start with, you saw communities in the framework, but communities from the point of view of universality: we want all communities to be connected to the internet and using the internet. The problem is that we don’t have any good indicators there, because there are no good-quality data on what a community is, on how communities are using the internet, and on how to survey that; that is very difficult, so we have to make do with proxies. As to the services used on the internet, what people do on the internet, we quite purposely left that out. We had a very long discussion about that in the beginning, and everyone was saying it’s very important, local content is important, and I fully agree, local content is extremely important. But once you start there, you have to make choices that we don’t want to make. E-agriculture is very useful for farming, e-learning is very good for people to get skills, absolutely true, but does that mean that we don’t like people to watch a video on YouTube while they’re waiting for public transport to come? To really put a value on what people do on the internet is too hard for us, and so is drawing the boundaries. So we decided to focus on this: people should be using the internet, or if people want to use the internet they should be capable of using it, with good-quality infrastructure, in an affordable way and in a safe way. What they do on the internet, and what the impact of that is, those are extremely interesting and important questions, but we kept them out of our focus, because otherwise our focus would be too wide and we wouldn’t be able to do anything meaningful, if I may use that word.

Moderator :
Okay, thank you, Martin. I would now like to move to the next speaker, and for the second question I encourage us to discuss in more detail after the session with the person who asked it. But let’s now go to Brazil, to Mr. Alexandre Barbosa, Head of the Centre of Studies for Information and Communication Technologies, CETIC.br. How can solid data inform policymakers on where the country stands with respect to the use of the internet, and where the digital divides are in a country and among vulnerable groups? So, Alexandre, you have seven to eight minutes.

Alexandre Barbosa:
Thank you very much, Deniz, and thank you to the previous speakers for setting the stage for what I’m going to say about the Brazilian case. May I ask our colleagues from technical support to… yeah, thank you very much. Well, good morning everyone. The intention here today is to share a little bit of how Brazil is adopting this concept of meaningful and universal connectivity, putting it into practice, and measuring it. It is important to mention that in the case of Brazil, policymakers from the Ministry of Communications and the regulator have embraced this concept of meaningful and universal connectivity, and from our side, ITU statistical data provide a measurement to policymakers. Let me start by saying that we need to understand how to move on from the previous concept we had of the digital divide, because we are no longer interested only in having or not having connectivity; we are more interested in providing a meaningful use of the internet and enhancing the internet experience for users. So this concept of meaningful and universal connectivity is critical to understanding how we can achieve broader objectives in the digital age: not whether we are connected or not connected, but what the digital inequalities are and how we can bridge the existing gaps, especially in terms of devices, skills, safety, and so on. So this concept allows us to understand how we can reduce inequalities, not only in access, but also in the use of the internet and in digital skills.
So it is a critical concept. This concept also shows policymakers the needs and existing gaps for more sustainable development, so that we can assure that no one is left behind in the digital era, and through it we can also highlight the need to address these issues within a more comprehensive global digital cooperation framework.
Having said that, it is important that we understand inequalities at three different levels. The first is the level of infrastructure, I mean connectivity: whether you have coverage and access, not only at the household level but also at the individual level; whether the quality of this connectivity provides high speed, without data caps; and also what type of device you are using. I come from Brazil, and although we have a high penetration of broadband, most low-income households access the Internet only through mobile devices, which is really limiting. So at this first level we are talking about infrastructure, connectivity and quality of access. At the second level we are talking about usage: what are the digital skills, what are the barriers or motivations? And at the third level, which is more about proficient usage of the internet, we are talking about really tangible outcomes, like content creation and promoting well-being through the use of the internet.
As for the level of analysis that the adoption of this concept of meaningful and universal connectivity allows, we can analyze at the micro level, including individual demographics, educational levels, socioeconomic income, age, gender and many other socioeconomic variables. At the meso level, we are talking about the offline network, the communities, and the neighbourhood effects: if you are a user in a poor community, or in a remote community, what are the neighbourhood effects that will affect you? And at the macro level, we have other very important components, mainly related to regulation, competition, coverage and affordability. Those are the dimensions that affect this concept of meaningful and universal connectivity. I think Martin already gave a good overview of the concept, so I’m not going to discuss it again; what I’m going to do from this slide onwards is look at the universality metrics that we are using to describe how people are using the internet and how households, communities and businesses are connected, and, in terms of connectivity enablers, share with you the indicators we are using for infrastructure, affordability, devices, skills, security and safety. I will now show some indicators that cover these universal and meaningful dimensions. Here you have the universality metric related to people. As you can see, we have a very long track record in terms of data series in Brazil, since the launch of the Partnership on Measuring ICT for Development in 2004; since then, Brazil has run several surveys every single year, not only household surveys but others as well. I’m showing here what happened in terms of Internet users in Brazil over the last eight years. You can see that we had very significant growth, but we still have some challenges in bridging certain gaps, like the urban-rural gap: rural areas are 10 percentage points below the total access, and if you look only at urban areas, the proportion is higher still.
And we can see, in terms of households, this universality metric, moving now from the individual to the household level: we have 80% of Brazilian households covered by broadband connectivity. This slide is also very interesting to show what type of analysis disaggregated data allow you to provide to policymakers, so they can design effective policies to address specific issues, like the socioeconomic gap. Among the low-income households in Brazil, which represent the majority of the population, we have only 6% of households with Internet access, whereas among the high-income households it is already universal, at 98%. In terms of infrastructure, I now call your attention to infrastructure as a connectivity enabler for meaningful and universal connectivity. Here we have the unequal penetration of fixed broadband in households in Brazil. If you go to the south and southeast regions of the country, which are the richest parts, most households have broadband connectivity, but if you go to the Amazon forest region, a higher proportion of connectivity is covered by radio or satellite. The same applies in the northeast of the country.
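The disaggregated analysis described here, comparing household connectivity shares across urban/rural and income groups, reduces to simple percentage-point gaps between group shares and the national total. A minimal sketch with invented figures, not actual CETIC.br survey results:

```python
# Sketch of gap analysis on disaggregated household connectivity shares.
# All figures below are invented for illustration, not CETIC.br data.

# share of households with broadband, by disaggregation group (percent)
shares = {
    "total": 80,
    "urban": 83,
    "rural": 70,
    "high_income": 98,
    "low_income": 60,
}

def gap(group_a, group_b):
    """Divide between two groups, in percentage points."""
    return shares[group_a] - shares[group_b]

def gaps_vs_total(groups):
    """Distance of each group from the national share, in percentage points."""
    return {g: shares[g] - shares["total"] for g in groups}

print(gap("urban", "rural"))             # urban-rural divide: 13 points
print(gap("high_income", "low_income"))  # socioeconomic divide: 38 points
print(gaps_vs_total(["rural", "low_income"]))
```

In practice these shares would come from weighted household survey microdata; the point of the sketch is only that once the shares are disaggregated, the divides policymakers act on are straightforward differences.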

Moderator :
Alexandre, you have around two minutes. I know the indicators will go by very fast.

Alexandre Barbosa:
Yes. Okay. So, this again shows the unequal penetration of fixed broadband in Brazil across urban and rural areas by different types of connectivity, like fiber, radio, satellite, etc. Someone mentioned community networks. We have conducted a study specifically on community networks in Brazil that also provides policymakers with important insights for designing policies in terms of meaningful and universal connectivity. In terms of connectivity again, fiber optic in Brazil has grown very fast in recent years. In terms of affordability, I would just like to highlight that high-income households spend over 30 times more on ICT services when compared to low-income households. In terms of mobile use, you can see again that a very large proportion of low-income households use the Internet exclusively on mobile phones. This is almost the same data, but with another disaggregation. Here it is important to talk about skills, because skills play a very important role in meaningful connectivity. If you don’t have digital skills, you are not going to use the Internet in a meaningful way. And here again, you can see a comparison of activities performed online: meaningful use in this sense happens in a much lower proportion. Here, again, is access by mobile, by computer, or by both. And the message this data shows is that when Internet users are mobile-only users, they have a lower proportion of digital skills. And a very important point that reinforces what I have said: mobile-only users perform less sophisticated activities when compared to users of both computers and mobile phones. I’m reaching the end of my presentation, just to show that Brazil has a very strong commitment to privacy and data protection.
And I would like to highlight the fact that we have a privacy and personal data protection survey to measure how individuals and organizations are aligned with the Brazilian law on personal data protection, and here I would like to stress that stakeholder engagement and cooperation are really very important in this process of defining and implementing these measurements. My last message is that indicators for measuring universal and meaningful connectivity are critical, and we need to have them in a disaggregated format. We may have different levels of disaggregation based on different variables. And the target for meaningful use of the Internet may change over time: what is good today may not be enough tomorrow. So, again, national averages without disaggregation may not be able to capture inequalities within the country. And I think my final message is that ITU plays a very important role in fostering the increase of data production among member states. We have a lot of data on infrastructure, but low data availability on skills and other key dimensions such as security and safety of use, which the concept imposes. Thank you very much, Denis, and sorry for taking a longer time. Thank you.
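Alexandre’s affordability point (high-income households spending over 30 times more on ICT) can also be turned around into a burden measure. A minimal sketch, with invented incomes and an invented basket price, using a 2%-of-income affordability threshold as an assumed benchmark rather than any official figure:

```python
# Affordability sketch with invented numbers: the same monthly price for an
# entry-level data basket weighs very differently across income groups.
basket_price = 30.0  # hypothetical monthly cost of an entry-level basket

monthly_income = {  # hypothetical average household income per group
    "low-income": 300.0,
    "middle-income": 1200.0,
    "high-income": 9000.0,
}

# Burden of the basket as a percentage of monthly income.
burden = {g: 100 * basket_price / inc for g, inc in monthly_income.items()}
for group, pct in burden.items():
    affordable = "yes" if pct <= 2.0 else "no"  # assumed 2%-of-income cutoff
    print(f"{group}: {pct:.1f}% of income (within 2% threshold: {affordable})")
```

A single national affordability number would miss this entirely, which is the same argument he makes for disaggregation of all the other indicators.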

Moderator :
No, no, no problem. You took time from your Q&A section. But thank you. This was a very comprehensive presentation and you mentioned many things. I will again invite people in the room to ask you one question. If you have a question, please walk up to the mics behind me and ask it. While waiting for questions from the room, let me thank our online moderator; there is also Catherine Townsend from Measurement Lab online. Catherine, if you would like to ask a question, you can ask after the question in the room. And very quickly (this is not taken from Alexandre’s Q&A time): we received feedback from Bangkok that ITU is working on the Smart Island initiative in Asia Pacific, and one of the things they are establishing is a universal service obligation policy, if people are interested in learning more. But yes, let’s take the question now.

Audience:
Hello, good morning, thank you very much for the presentation. Carlo Rey Moreno from the Association for Progressive Communications. A really interesting effort, I mean, moving from universal access to meaningful access, and all the indicators and the work that has been done there. I agree very much with Alexandre and the presentation on the need for accuracy, or maybe not accuracy but granularity, because if we have advanced from 75 to 95 percent, and a target is considered met from 95 to 100 percent, you are leaving a lot of people behind without that granularity. There are indicators, such as affordability, that look at a very low amount of data. Many studies say that the trend of data consumption is going higher and higher, and those indicators remain a bit low if we consider how much data people are going to be using in 2030, the affordability of that, and the granularity when there are huge inequalities inside countries. That would be one thing: whether there is an opportunity to reconsider those indicators. The other one would be about the accuracy of the data. There are exercises by civil society and universities, at least in Malaysia and Nigeria, that I know of, that are creating tools to challenge the data, in particular around coverage and the quality of coverage in rural and remote areas, because it is way less than what is being reported. So what is the quality of the data that you are using? What is the source? And how could we work together on improving the quality of the data being used to actually measure against the indicators that you are using? Thank you very much.

Moderator :
Thank you very much for that question. I see we may have one more question. Let’s also take that one, and then Alexandre can answer both at the same time. Great, thank you.

Audience:
Hi, everyone, my name is Farzana Badi. I’m doing some research for USAID, which is working on human-centered approaches to digital transformation. So I was wondering whether, in your definition of meaningful connectivity, you also consider this kind of human-centered approach, and whether there are ways to discuss with the different communities that you work with and build networks for what their needs are. I’m just looking for best practices and best approaches around that. Sorry if it’s not relevant, but I thought I’d raise it.

Moderator :
No, thank you, it’s very relevant. Yes, Alexandre, you first, but if the other panelists would like to respond to the second question, they can.

Alexandre Barbosa:
Yes, well, thank you for both questions. The first one related to data quality. We have been discussing this issue of data availability and quality in international forums, such as the ITU forums, which host two expert groups on ICT statistics. In the case of Brazil, CETIC conducts the surveys itself, so those are primary data. We strictly follow the international methodological recommendations, with a probability-based, representative sample in Brazil, and the sample is designed so that we can provide data disaggregated by the variables foreseen in the sample design. So those are high-quality data, and most of them are demand-side data. But every member state also provides supply-side statistical data on the offer and coverage of infrastructure, and those data are typically provided by regulators. Regulators compile the data from the operators and submit it to ITU (maybe Martin can speak with more legitimacy on that), but member states provide consolidated and aggregated data, and every member state should be able to disaggregate that data so that we can understand inequalities at different levels: education, age, gender, regional, urban versus rural. So quality is a very important issue. The second question related to the human-centered approach. This is a very relevant question, but it is not considered in the design or implementation of this concept. I don’t know if Martin wants to add anything.

Moderator :
Considering the time, Martin, you can respond briefly later. Yeah, thank you for reminding me. So can we go to Catherine online? Catherine Townsend, if you are still online, please take the floor. I don’t know if the online moderator can help us make Catherine, and also Anir Chowdhury, a presenter. Hi, Thierry here. Catherine cannot unmute herself, so I can do it now. Okay. Oh, there you go.

Audience:
Great. Thank you all so much. So yes, Catherine Townsend here. Thank you for this session today. I run a nonprofit called Measurement Lab, which provides the largest open data set about the speed and performance of the Internet and of interconnection points around the world. Primarily, the way people experience how well their service works is by running a speed test, but speed is not well defined and is an imperfect proxy for what a user experiences. There is a lot of investment right now in broadband infrastructure, and even broadband is not a universally recognized goal of what connectivity should be. All this is background to say that we are trying to improve our own metrics for what meaningful connectivity means (since you all have defined this) with an internet quality barometer. So I wanted to ask you all: what additional information would you have liked to have when developing the meaningful connectivity metrics, particularly information the technical community could add to and support? What are the gaps, and what are the specific measurements that you would hope to see? Thank you.
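Catherine’s point that a single headline speed is an imperfect proxy can be made concrete with a small sketch. The measurements and the threshold set below are invented for illustration, not an official standard or a Measurement Lab metric; the idea is simply that medians across repeated tests, plus latency, say more about quality than one download figure.

```python
# Sketch: summarise repeated speed-test samples with medians and a latency
# check, rather than a single headline "speed". All samples are invented.
import statistics

# (download_mbps, upload_mbps, latency_ms)
samples = [
    (48.0, 5.1, 22), (3.2, 0.9, 180), (51.5, 6.0, 25),
    (45.9, 4.8, 30), (2.8, 0.7, 210), (49.7, 5.5, 28),
]

down = statistics.median(s[0] for s in samples)
up = statistics.median(s[1] for s in samples)
lat = statistics.median(s[2] for s in samples)

# Assumed illustrative threshold set, not a recognised definition.
meets = down >= 10 and up >= 1 and lat <= 100
print(f"median down={down} Mbps, up={up} Mbps, latency={lat} ms, ok={meets}")
```

Note how the two very slow, high-latency samples would drag a mean down sharply but barely move the medians; which summary is "right" is exactly the kind of definitional gap she describes.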

Moderator :
Thank you, Catherine. I think all the panelists can think about that question: on which components of meaningful connectivity can we get more help from the technical community? So, respecting the time, I will go to the next panelist now, from the European Commission, Mr. Peter Mariën, Director General. He will be talking about the Global Gateway, a programme through which the EU is strengthening connections between Europe and the world. So let’s hear from him, but please keep Catherine’s question in mind, all panelists, and we will come back to Martin as well.

Peter Mariën:
Thank you, thank you very much, and maybe we can come back to the questions a bit later. So my name is Peter Mariën, I work in the European Commission, in the Directorate-General for International Partnerships, and I am Head of Sector for Digital Governance. First of all, I would like to underline the EU’s support for the development and further development of this work on universal and meaningful connectivity. We started thinking about such needs a few years ago, and we were also inspired by the EU’s own approach, which includes, amongst others, the Digital Economy and Society Index. It is not the same thing, but we consider having robust data an essential condition for measuring what we and our partners are doing, whether we are achieving objectives and, of course, for setting policy to achieve those objectives. And as I could see from the previous speakers and the questions in the room, this need for data is essential, and we are very happy to work with ITU on this. I have been asked to elaborate on the EU’s experience with global partnerships and with the Global Gateway, so I will go through a few slides in that context. The Global Gateway is an opportunity for partnerships between the European Union and partners around the world. The European Commission has foreseen investments of around 300 billion euro over the next couple of years. These investments will come from European Union grants, but of course, as other speakers here also mentioned, we do this together with the private sector, and that also includes banks and the financial leverage of various banks and financial intermediaries. The Global Gateway is intended to be a principle-based cooperation mechanism which focuses on a few sectors, and one important sector is digital. When we look at digital, our policy covers elements such as the government side, the business side, infrastructure and skills. Let’s say that is the large compass you can see on the slide.
But when we go a little bit deeper: how does the Global Gateway differentiate itself, how is this an alternative to the offers available to our partners? I would like to point out the following elements, which also resonate with what was said by previous speakers and people in the room. The EU promotes a human-centric digital transformation. We put the person at the center, not the companies, not the states, and this then reflects itself in the policies. We want a trusted Internet, an open and free Internet where people can feel safe and secure, where their privacy is respected, and which leaves nobody behind and bridges the digital divide. I will go a bit fast because I know we are short on time. We want to increase resilience. Security is important; you can also see this in the universal and meaningful connectivity indicators, in the aspect of security and safety. Boosting digital sovereignty is essential. It is, of course, a very delicate topic, but we also think this is an area where the EU can be, and hopes to be, an equal and trusted partner, thinking of the interests of all the parties involved. And this is important when we talk about, for example, data governance, amongst so many things, and privacy. I mentioned the open Internet. And then, of course, promoting the twin transition: for us, this means a focus on environmental sustainability. So, the Global Gateway is about investments, but these investments are in hard infrastructure and in soft elements, and they are meant to be sustainable. Sustainable means transparent funding, but also sustainable and environmentally sustainable funding, amongst others. Now I will jump into some examples. In March of this year, in Colombia, the EU launched the EU-LAC Digital Alliance with a whole range of Latin American and Caribbean countries. This alliance has different elements; on the one hand, there is a policy aspect to it.
And on the other hand, there is regular policy dialogue on a range of topics, but also concrete action, and this action is already ongoing. So I will run through a few examples. As a first element, we will work on policy and regulatory frameworks together. That includes policy on connectivity, on e-governance, on data governance, on cybersecurity, and probably soon (we are already discussing it) we will have regulatory framework discussions on the topic of artificial intelligence. Of course, this all links again to the universal and meaningful connectivity indicators, because it is not just about investing in hardware. Another example is the expansion of the BELLA programme. BELLA is a fiber optic cable between Europe and Latin America; I will come back to that in one of the next slides. Third point: the private sector was mentioned today, and digital transformation without the private sector is a no-no, I mean, it will not work. So in the EU-LAC Digital Alliance there is also the setup of what we call the digital accelerator, where we basically intend to set up about 100 new joint ventures and about 50 startups. And as a fourth point, I will mention something about Copernicus, which is an Earth observation system, so satellite data. On the BELLA link, in the left image you see the current cable in blue as it runs, and on the right-hand side you see the proposed extensions of this cable as we speak. Now, this seems like we are talking about hardware, and it is, but this programme also connects more than 1,200 academic institutions, and in that sense it again relates to the universal and meaningful connectivity indicators. Of course it also relates to affordability and so on, but at the moment there is a big focus on the academic link. Okay, I mentioned Earth observation. One of the questions is: okay, but what do you do with connectivity?
Well, the European Union’s Copernicus system is a set of satellites providing Earth observation data. I have been informed that it is at the moment the most advanced system there is. This system gives open and free data to any individual who wishes to access it, around the globe, and it can be used for a whole range of policies. We are working to set up a local data hub in Panama, and also one in Chile, but I won’t go into more detail right now. That, of course, can lead to policies and all kinds of other economic and social impacts. The universal and meaningful connectivity indicators also look at safety and security, so in the EU-Latin America and Caribbean context we are also working on cybersecurity. There is a whole range of actions we are taking on cybersecurity: policy and regulation, critical infrastructure protection, capacity building, but also the mainstreaming of cyber in our programming. The whole topic of secure and trusted connectivity has been high on the agenda for some time, and we take it into account in all our programmes. As a concrete example, in the Dominican Republic we have set up a regional cybersecurity hub for the whole region, together with our partners, of course. So there I gave an example of a regional alliance, the EU-LAC partnership. But we also work, of course, at the bilateral level and at other levels. At the bilateral level, I just want to point out the concept of digital economy packages where, as you can see on the slide, we want to link our partnerships and investments in infrastructure with investments and partnerships on the soft elements: for example, data governance, digital skills, and e-government, amongst others. Concretely speaking, this slide gives an example of our cooperation with Kenya.
So in Kenya, you can see below on the slide a whole range of European Union countries coming together in a Team Europe spirit, together with the European Union institutions, and we provide a package of digital action with Kenya. In total, for the next couple of years, we are looking at 430 million euro. I won’t go into too much detail because of the time, but you can see that we are looking at reducing the gap and leapfrogging, but also assuring open and inclusive governance. With the arrows, I just wanted to highlight some elements related, again, to universal and meaningful connectivity. On the left you see last-mile digital connectivity. This was also mentioned today: what about rural areas, what about the most vulnerable communities? It is about expanding the network and the fiber, so this covers submarine cables, but also terrestrial and last-mile connectivity. Looking at TVET, that is, vocational education and training: we are working with hundreds of TVET institutions in Kenya on scaling. And on the right, for example, you see data protection and procurement legislation and cybersecurity, so this is again about policy and other elements which can impact affordability, safety, and security. The digital package, as we call it, with Kenya was announced last week in Kenya by our Commissioner. This is the tweet she put out, and as you can see, in the tweet she mentions remote places and underprivileged areas and leaving nobody behind. And I would like to finish with that. Thank you very much. Thank you.

Moderator :
Thank you very much, Director General. That was a very rich presentation, and all the presentations will be available online for those who would like to access them later. I think we have time now for one question for the Director General. If you have a question, please stand up and ask it.

Peter Mariën:
Thank you very much. I just want to specify that I’m the Head of Sector, not the Director General, but thanks for the promotion.

Moderator :
Thank you. Thank you for that clarification. Martin, you did not yet answer a previous question.

Martin Shepherd:
Yes, thank you, Denis, there are still a few questions from the previous sessions that I owe an answer to. So, the human-centered approach: I’m sure it’s not really the answer that you want to hear, but we also have a human-centered approach. We want people to be using the Internet; we are not focusing on business, but on people and everything around people and communities. There were a few questions on the quality of the indicators and on the targets themselves, whether they can be changed, whether the targets are moving. Yes, we have an initial set of targets, but the idea is to keep monitoring this and also to look annually at the targets as part of the project and see whether they need changing: whether we need different targets, whether we have to move the targets, or whether we should include other indicators. There are a few areas where we are not so strong. On safety and security, we don’t have any good indicators at the moment. The question from Catherine online is also very important and very good. We do have a target on speed, but we don’t have good indicators yet, so we are very happy to talk to you and see how we can maybe include your work in ours, or how we can move forward together. We are also looking at alternative sources of data. We have a number of big data projects going on in our division, and that can help in getting better regional data within countries, for example, which was also one of the comments. The comment was that in rural areas the reality on the ground is maybe not the reality that you see in the data that we get. We get the data from the regulators, and we do process it, but it is possible, of course, that the reality on the ground is not exactly as it is perceived in the data we receive. For that, the big data projects and the Measurement Lab data can all help in moving forward.

Moderator :
Okay. Thank you, Martin. Is there any question for Peter online? If not, let me move to the next panelist, Mr. Anir Chowdhury, Policy Advisor of the A2I Programme in Bangladesh. He is online; I believe he is in North America right now. So we are now moving to a large country in Asia, where there is still a considerable journey towards UMC, but where connectivity is considered very important. We pose the same questions to Mr. Chowdhury. Anir, if you are online, the floor is yours. And I think we may need to upgrade Anir’s level in Zoom so he can speak. Yes, Anir still cannot unmute himself; could the online moderator give him the right to unmute? And yeah, thank you. Okay, good.

Anir Chowdhury:
I think I’m on right now. Denis, can you hear me?

Moderator :
Yes, Anir, the floor is yours.

Anir Chowdhury:
Wonderful, thank you so much. I have been listening with a lot of interest to what the other speakers have been saying, and thank you for giving me the floor. In Bangladesh, we are actually seeing a surge in Internet usage, but still not at the point where we would like it to be. To increase Internet penetration, in addition to the cell phone providers, the MNOs, which have covered 98% of the country with 4G, the availability is there, but access is still lacking; I will come to that point. In terms of fixed broadband, what we have done is work with the private sector: three national transmission providers to connect about 3,800 rural locations, the lowest tier of local government institutions in the country. So about 3,800 rural locations have been connected with fiber in the last few years. Another 700-plus locations which are hard to reach (the island areas, the hilly areas) we have connected using the service obligation fund, which is a percentage of the profit that the MNOs deposit with our regulator. So we have used that fund to connect another 700-plus locations with fiber or radio connectivity. We have a new project, started just last year, where we are going to connect about 110,000 institutions with fiber. These will cover all the government offices at the lowest tier, and almost all the schools, primary and secondary; you will see the data shows that school connectivity is quite low, and this will remedy that situation. We will also connect the courts, about 2,000 courts across the country, the 14,000-plus health facilities in the rural areas, and a lot of the post offices, about 8,000-plus. About 110,000 institutions across the entire country will be covered, and we are expecting the coverage to be done in the next few months. These are the 110,000 institutions, but that still won’t bring fiber coverage to homes. That is where we are expecting the ISPs to come in.
In the country, we have close to 2,000 ISPs, and about 1,000 of them actually work in the rural areas. They will be extending connectivity to the rural locations. Now, that brings me to the question of cost. Our regulator has actually capped the cost at an affordable level; it is just that the fiber has not been extended to the rural areas. The cost of 4G is at an affordable level for the most part. We have many different packages: packages that run for three hours, packages that run for three days or seven days. So there are many types of packages that the telcos have provided at quite an affordable cost, and the cost has been brought down many, many times in the last few years to the point that it is affordable. But one of the biggest bottlenecks right now is the availability of smartphones. Smartphone penetration is only about 52 percent in the country. Even though broadband is available, I won’t say we are at a point of meaningful connectivity, because the devices are not available: devices are still expensive compared to the per capita income in the country. Add to that the issue of digital skills. Just today, a couple of hours ago, I finished a conference at the MIT Innovation Lab, where we were discussing what digital connectivity and digital skills mean in the near future. The issue of connectivity will become more and more important. As we know, in Bangladesh the official figure that I see on the dashboard here is quite low: within the country we have a higher figure when we add mobile connectivity. It is about 70 percent Internet connectivity that we talk about today, while this official figure shows less than 40 percent. But because of low smartphone penetration, we don’t actually see meaningful usage of that broadband. From a skills standpoint, what we discussed today at MIT is: what will the skills requirements be in the future?
Today, we talk about the ability to use a keyboard, whether a small keyboard on a smartphone or a large keyboard on a laptop or desktop, as something people must use for us to say that a person is digitally literate. But in the future, and that future is not too far away, we are seeing the emergence of large language models in AI and the ability of currently digitally illiterate users to use computation just by using their native languages. Just next week, we are deploying a large language model in the native language, Bangla, to augment our national call center. So a lot of the questions that our national call center operators have been answering for the last three years will now be answered by voice bots, and that technology will mature over time. What will happen in the next two to three years is that a lot of native languages around the world will start to use large language models, and the concept of digital literacy will be totally redefined. That is a very important aspect for us to think about, which I don’t know whether you are considering in the policy debate right now. Just a couple of months ago we completed a piece of research called the E-quality research, which looks at the issues of the digital divide. Obviously, connectivity, meaningful connectivity, is one of the primary areas of the digital divide. Two thirds of the population are connected globally; one third is not: 2.6 billion people are not on the Internet. In Bangladesh, the numbers are very similar: one third is not on the Internet. The second issue is the digital literacy that I talked about, which will be completely redefined because of AI and large language models. And the third issue, which came up in our research and which we don’t talk about much, is the issue of content and service design.
The way we have designed it for the ultra poor, the way we have designed it for persons with disabilities, the way we have designed it for women, the way we have designed it for the CMSMEs, the cottage, micro, small and medium enterprises. When I say “the way we have designed it”, we actually have not designed it for these clientele: not for the hardcore poor, not for persons with disabilities, not for women, not for the CMSMEs, and that is where a lot of our attention needs to go. So even if we achieve full coverage in terms of affordable Internet connectivity, and the digital skills issue is solved with AI to a large extent, if we don’t design the services in a meaningful way, then we will not actually get to the point of meaningful connectivity, because people will not be able to use the content and services meaningfully, to their best advantage; we will actually still widen the digital divide. So that is something I think we need to bring into our discourse. The research report that we published for Bangladesh just two weeks ago at the UN General Assembly, the E-quality report, only looks at the Bangladesh perspective, but it is similar to the perspectives of many LDCs and countries in the Global South. We hope to extend that research, through the E-quality Centre that we also launched two weeks ago, to other LDCs in the next few years, with the support of organizations such as the UN, the World Bank, the World Economic Forum, the Commonwealth Secretariat, and many other partner organizations that we are working with. Thank you very much, Denis.
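The gap Anir describes between network coverage and actual use can be put as simple arithmetic. The figures below are the rough ones from his remarks (98% 4G coverage, about 70% Internet use, 52% smartphone penetration), combined here purely for illustration.

```python
# Sketch of Anir's point: coverage alone does not deliver use. Rough figures
# from the talk, combined here only for illustration.
coverage_4g = 0.98     # share of population within a 4G signal
internet_use = 0.70    # share actually using the internet (incl. mobile)
smartphone_pen = 0.52  # share owning a smartphone

usage_gap = coverage_4g - internet_use        # covered but not online
effective_ceiling = min(coverage_4g, smartphone_pen)

print(f"usage gap: {usage_gap:.0%}")
print(f"ceiling on meaningful mobile use: {effective_ceiling:.0%} "
      "(device availability, not signal, is the binding constraint)")
```

The `min` expresses his argument directly: with devices at 52%, the effective ceiling on meaningful mobile use sits far below the 98% coverage figure, so device affordability and service design, not more towers, are the binding constraints.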

Moderator :
Thank you, Anir. This was really valuable input to this discussion and I know that it was not the ideal scenario for you to connect remotely, but we very much appreciate it. Let me first turn to the room and also the online participants if there are any questions for Anir. Questions? Yes, please.

Audience:
Hello. Thank you for the presentation. My question is this: the concept of meaningful connectivity sometimes also touches on the idea of having connectivity everywhere, not just limited to some places (for example, being connected at work or in schools). So how do you, in Bangladesh, deal with these two policy options: fostering connectivity in institutions versus in households, or connecting both places? How do you deal with this in your policymaking in the field? Thank you so much.

Moderator :
Anir, would you like to briefly respond?

Anir Chowdhury:
Sure. In my remarks, I talked about those two issues. One is institutional connectivity. We have extended fiber connectivity to the rural areas, but not yet to the institutional level: these are the 4,500-plus rural locations that we have connected as hubs of connectivity, from which connections will be branched out to the institutions. And as I mentioned, about 110,000 institutions will be connected in the next few months, including offices, schools, health facilities, courts, and so on. That is the fiber connectivity, a public-private partnership with the three national transmission providers and over 1,000 ISPs. We will also get fiber to households at an affordable price, with the price capped by the regulator. Then there is the wireless connectivity through our telcos, the 4G connectivity. But as I mentioned, even though about 98% of the country has a 4G network, only about half of it is actually being used, because of the lack of devices. So: affordable devices, and the right design of services and content. When we are able to make device costs affordable to the right persons and households, and to design the services and content in a way that makes sense to the currently marginalized population, that is when these issues will be resolved. There are policy matters that we are addressing, technology that we are deploying, and skills development that we are also going through. But again, as I mentioned, with AI, skills development will be a thing of the past in the next few years. Does that answer your question, sir?

Audience:
Thank you, Anir. Yes.

Moderator :
Are there any questions in the room? No. We are coming to the end of our session now. Before we close, I would like to give one last opportunity to all speakers, if there is anything that you couldn't pass along yet. Maybe, Alexandre, we can start with you, but please keep it to one or two minutes so that the people watching us online can take away your final point.

Alexandre Barbosa:
Thank you very much, Denis. Well, I think that this concept of UMC, universal and meaningful connectivity, is really important at this moment we are living in, where disinformation and the lack of skills for content creation and for critical use of the Internet are of utmost importance. My message to countries is to really produce data that can be used to measure progress towards achieving this concept. Brazil is maybe among the countries that produce many statistics in this area, and what we would like to see is other member states also producing data that will allow tracking the progress.

Moderator :
Thank you. Agne, would you like to add something?

Agne Vaiciukeviciute:
Thank you very much for such insightful discussions and messages so far. I will just add several aspects from our side. I think what is really important, and what was highlighted in all the discussions, is collaboration and coordination within the government and with the other stakeholders as well. This is a huge part of making it happen. And of course, I want to reflect on one of the questions from the audience: I think what is really important is not only the quality of data, but that all these measures are checked once in a while to see whether they are still good enough at that time. Everything is changing so fast, and we need to be adaptable and flexible in the way we approach the measures. And of course, the backbone of everything is data quality. I bring this message back to myself as a government official every day: how to make sure that we gather more quality data, to make more insightful decisions and then to measure the progress. Thank you.

Moderator :
Thank you. Thank you so much. Peter, please.

Peter Mariën:
Thank you very much. After these excellent comments, it's hard to add much more, but just a few points. For me, this kind of discussion confirms our need for, and our commitment to, this kind of work. A lot of the community in organizations like mine is very much focused on making things happen in the field, but we do need to do the basic homework to know what it is we need to do. In that context, having the data, and everything that was said about this, remains so important. And of course we know how difficult that is within the countries in the field, in our partner countries. So it should be done at the global level, but in the end it also needs to be done at the local level. Maybe just to add: what I took away is the challenge of last-mile connectivity, or whatever you want to call it. We are actually struggling with that quite a bit: how to make it interesting for the private operators, or for the public operator, and under which philosophy you would spend taxpayers' money on that. And what I also like very much are these questions about foresight: what will we need in five years, or in ten years, or beyond that, whether it's the new technologies coming up and the skills that you will need, or the systems that might be completely different, or, who knows, other indicators and measurements that we will need to look at. Thank you.

Moderator :
Thank you, Peter. And Anir, let's go back to you for one or two final messages that you would like to pass on. Thank you.

Anir Chowdhury:
Dennis, again I'll go back to the equality report that we published for Bangladesh. I think that report has given us a deeper insight into the issue of universal and meaningful connectivity, and, connected to that, skills development and service design are two issues that have come up. So connectivity is important, but to make sure that connectivity is meaningful, we have to address these other issues of skills and service design. We're now working on the issue of an equality index. It's a tough issue right now, because there are about seven different areas that we are exploring in terms of how meaningful connectivity serves us: in education, in health care, in employment, in CMSME issues, the cottage and micro-enterprise issues, and in public service delivery. And we hope that this equality index will give us a much deeper understanding, with both quantitative and qualitative data, of the UMC issues, and also take us forward. And I really like the issue of strategic foresight: looking at what will be needed in the next five or even ten years, when we'll have not just humans exchanging data with each other through systems and social media, but a lot of devices and perhaps even robots working in different fields, in farms, in factories, perhaps even offices, also sharing data. That's the future we will be looking at, maybe ten years down the road, maybe even five. Painting the pictures of alternative futures with strategic foresight, I think, could be really valuable as well, so I really appreciate that issue being brought up in today's discussion. It was a very rich discussion; my gratitude to all participants. And Dennis, thank you for the moderation. Really useful. Thank you.

Moderator :
Thank you, Anir. I’ll turn to ITU now.

Martin Shepherd:
I hope ITU also thinks the same way. Yes, we're out of time, so I will keep my final remarks short. I hope you have enjoyed the discussion. I can only echo the previous comments: the project promoting and measuring universal and meaningful connectivity is spot-on, important and timely, so we will work very hard and continue this work. Thank you.

Moderator :
Thank you very much, Mr. Martin, and thank you to all the people who have joined us and shared so much insight. I think the discussion will, and has to, continue. With this, we have come to the end of our session. I hope it was useful for all, and I hope it will be watched online by others in the future. Thank you to everyone, and thank you to the ICT and AI team for organizing this event. Thank you very much. Bye-bye.

Alexandre Barbosa

Speech speed

217 words per minute

Speech length

2463 words

Speech time

680 secs

Agne Vaiciukeviciute

Speech speed

137 words per minute

Speech length

1453 words

Speech time

635 secs

Anir Chowdhury

Speech speed

172 words per minute

Speech length

2163 words

Speech time

755 secs

Audience

Speech speed

170 words per minute

Speech length

1170 words

Speech time

414 secs

Dr Cosmas Zavazava via Video 1

Speech speed

143 words per minute

Speech length

219 words

Speech time

92 secs

Martin Shepherd

Speech speed

163 words per minute

Speech length

2277 words

Speech time

839 secs

Moderator

Speech speed

154 words per minute

Speech length

1891 words

Speech time

736 secs

Peter Mariën

Speech speed

151 words per minute

Speech length

2071 words

Speech time

825 secs

Video 2

Speech speed

117 words per minute

Speech length

241 words

Speech time

123 secs
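The per-speaker statistics above relate speech length and speech time through the words-per-minute figure. As a quick sanity check, the rates can be recomputed from the reported words and seconds; a minimal sketch follows (figures transcribed from the list above; the report's own rounding is not fully consistent, so the comparison allows a one-unit tolerance):

```python
# Recompute words-per-minute (wpm) from reported speech length and time.
# The (words, seconds, reported_wpm) tuples are transcribed from the
# statistics above; rounding in the report varies, so compare loosely.
speakers = {
    "Alexandre Barbosa": (2463, 680, 217),
    "Agne Vaiciukeviciute": (1453, 635, 137),
    "Anir Chowdhury": (2163, 755, 172),
    "Martin Shepherd": (2277, 839, 163),
}

for name, (words, secs, reported_wpm) in speakers.items():
    wpm = words / (secs / 60)  # words per minute
    # Each reported rate matches the recomputed one to within one unit.
    assert abs(wpm - reported_wpm) <= 1, name
    print(f"{name}: {wpm:.1f} wpm (reported {reported_wpm})")
```

Run against all nine entries, the same check holds to within one unit throughout, which suggests the wpm column is derived directly from the length and time columns.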

AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409



Full session report

Audience

The expanded summary examines the impact of artificial intelligence (AI) on global security from various perspectives. One viewpoint raises concerns about the potential for AI to make the world more insecure, particularly in the context of warfare. This perspective highlights the evolution of the massive retaliation strategy, which now considers preemptive strikes due to the capacities of AI. The comparison of AI capacities on the battlefield may favor preemptive actions. Overall, the sentiment towards the effect of AI on world security is negative.

Furthermore, the development of deep learning in AI has raised worries about the easier generation of bioweapons, leading to concerns about biological warfare. With AI and deep learning, the process of generating bioweapons has become more accessible, posing a significant threat. This argument emphasizes the need to ensure biosecurity and peace. The sentiment surrounding this issue is also negative.

In addition to the concerns about AI in warfare and biological warfare, ethical considerations play a crucial role in the development and deployment of autonomous weapon systems. It is recognized that there is a need for ethical principles to guide the use of AI in armed conflicts. The sentiment regarding this perspective is neutral, but it highlights the importance of addressing ethical issues in this domain.

On the other hand, AI can potentially be used to reduce collateral damage and civilian casualties in conflict situations. This observation suggests a potential positive impact of AI on global security, as it can aid in minimizing harm during armed conflicts. The sentiment towards this notion is also neutral.

In conclusion, the analysis reveals mixed perspectives on the impact of AI on global security. While there are concerns regarding its potential to make the world more insecure, particularly in warfare and biological warfare, there is also recognition of the potential benefits of AI in reducing collateral damage and civilian casualties. It is crucial to ensure that ethical principles are followed in the development and deployment of AI in armed conflict situations. Additionally, the maintenance of biosecurity and peace is of utmost importance. These factors should be considered to navigate the complex landscape of AI and global security.

Fernando Giancotti

A recent research study conducted on the ethical use of artificial intelligence (AI) in Italian defence highlights the importance of establishing clear guidelines for its deployment in warfare. The study emphasises that commanders require explicit instructions to ensure the ethical and effective use of AI tools.

Ethical concerns in the implementation of AI in defence are rooted in the inherent accountability that comes with the monopoly on violence held by defence forces. Commanders worry that failure to strike the right balance between value criteria and effectiveness could put them at a disadvantage in combat. Additionally, they express concerns about the opposition’s adherence to the same ethical principles, further complicating the ethical landscape of military AI usage.

To address these ethical concerns and ensure responsible deployment of AI in warfare, the study argues for the development of a comprehensive ethical framework on a global scale. It suggests that the United Nations (UN) should take the lead in spearheading a multi-stakeholder approach to establishing this framework. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the study highlights the need for a unified approach to tackle ethical challenges at an international level.

However, the study acknowledges the complexity and contradictions involved in the process of addressing ethical issues related to military AI usage. It notes that reaching a mutually agreed-upon, perfect ethical framework may be uncertain. Despite this, it stresses the necessity of pushing for compliance through intergovernmental processes, although the prioritisation of national interests by countries further complicates the establishment of universally agreed policies.

The study brings attention to the potential consequences of the mass abuse of AI, highlighting the delicate balance between stabilising and destabilising the world. It recognises that AI has the capacity to bring augmented cognition, which can help prevent strategic mistakes and improve decision-making in warfare. For example, historical wars have often been the result of strategic miscalculations, and the deployment of AI can help mitigate such errors.

While different nations have developed ethical principles related to AI use, the study points out the lack of a more general framework for AI ethics. It highlights that the principles can vary across countries and organisations, including the UK, USA, Canada, Australia, and NATO. Therefore, there is a need for a broader ethical framework that can guide the responsible use of AI technology.

The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the importance of human oversight and responsibility, asserting that the ultimate decision for actions should not be handed over to machines.

Furthermore, the study highlights the issue of collateral damage in current defence systems and notes that specific processes and procedures are in place to evaluate compliance and authorise engagement. It mentions the use of drones for observation to minimise the risk of unintended harm before any decision to engage is made.

In conclusion, the research on ethical AI in Italian defence underscores the need for clear guidelines and comprehensive ethical frameworks to ensure the responsible and effective use of AI in warfare. It emphasises the importance of international cooperation, spearheaded by the UN, to address ethical challenges related to military AI usage. The study acknowledges the complexities and contradictions involved in this process and stresses the significance of augmenting human decision-making with AI capabilities while maintaining human control.

Paula Gurtler

The discussion surrounding the role of Artificial Intelligence (AI) in the military extends beyond legal autonomous weapon systems. It includes a broader conversation about the importance of explainable and responsible AI. One key argument is the need for ethical principles to be established at an international level. This suggests that ethical considerations should not be limited to individual countries but should be collectively agreed upon to ensure responsible AI usage.

Another significant aspect often overlooked when focusing solely on legal regulations is the impact of AI on gender and racial biases. By disregarding these factors, we fail to address the potential biases embedded within AI algorithms. Therefore, it is crucial to consider the wider implications of AI and its contribution to societal biases, ensuring fairness and equality.

Geopolitics and power dynamics further complicate the utilization of AI in the military. With nations vying for supremacy, AI becomes entangled in strategic calculations and considerations. The use of AI in military operations can potentially affect global power balances and lead to unintended consequences. This highlights the intricate relationship between AI, politics, and international relations, which must be navigated with care.

Although various ethical guidelines already exist for AI deployment, one question arises: do we require separate guidelines specifically designed for the military? The military context often presents unique challenges and ethical dilemmas, differing from other domains where AI is utilized. Therefore, there is a debate over whether existing guidelines adequately address the ethical considerations surrounding AI in military applications or if specific guidelines tailored to the military context are necessary.

In conclusion, the debate regarding AI in the military extends beyond the legality of autonomous weapon systems. It encompasses discussions about explainable and responsible AI, the need for international ethical principles, the examination of gender and racial biases, the influence of geopolitics, and the necessity of specific ethical guidelines for military applications. These considerations highlight the complex nature of implementing AI in the military and emphasize the importance of thoughtful and deliberate decision-making.

Rosanna Fanni

During the discussion, the speakers explored the potential dual use of artificial intelligence (AI) in both civilian and military applications. They acknowledged that AI systems originally developed for civilian purposes could also have valuable uses in defense. The availability of data, machine learning techniques, and coding assistance makes it feasible for AI to be applied in both contexts.

A major concern raised during the discussion was the lack of ethical guidelines and regulations in the defense realm. While there are numerous ethical guidelines, regulations, and laws in place for the civilian use of AI, the defense sector lacks similar principles. This highlights a disconnect between the development and use of AI in civilian and defense contexts. Developing ethical guidelines and regulations specific to AI in defense applications is crucial to ensure responsible and accountable use.

The European Union’s approach to AI, particularly the exclusion of defense applications from the AI Act, was criticized. The AI Act employs a risk-based approach, yet its exclusion of defense applications contradicts this approach. This omission raises questions regarding the consistency and fairness of the regulatory framework. The speakers argued that defense applications should not be overlooked and should be subject to appropriate regulations and guidelines.

Another important issue discussed was the need for international institutions to take on more responsibility in terms of pandemic preparedness. The COVID-19 pandemic has demonstrated the necessity of being prepared to tackle challenges and risks arising from the rapid spread of bio-technology. The speakers emphasized that institutions should be better prepared to ensure the protection of public health and well-being. Moreover, they stressed that equal distribution of resources is crucial to prevent global South nations from being left behind in terms of bio-risk preparedness. The speakers highlighted the importance of avoiding a race between countries in preparedness and ensuring that global South countries, which often lack resources, are provided with the necessary support.

In conclusion, the discussion revolved around the need to address the potential dual use of AI, establish ethical guidelines and regulations for defense applications, critique the exclusion of defense applications in the European Union’s AI Act, and emphasize the role of international institutions in pandemic preparedness and equal distribution of resources. These insights shed light on the ethical and regulatory challenges associated with AI, as well as the importance of global collaboration in addressing emerging risks.

Pete Furlong

The discussion revolves around the impact of artificial intelligence (AI) and emerging technologies on warfare. It is argued that AI and other technologies can be leveraged in conflicts, accelerating the pace of war. These dual-use technologies are not specifically designed for warfare but can still be used in military operations. For example, AI systems that were not initially intended for the battlefield can be repurposed for military use.

The military use of AI and other technologies has the potential to significantly escalate the pace of war. The intent is to accelerate the speed and effectiveness of military operations. However, this raises concerns about the consequences of such escalated conflicts.

One of the challenges in implementing AI principles is the broad interpretation of these principles, as different countries may interpret them differently. This poses challenges in creating unified approaches to AI regulations and ethical considerations. While broad AI principles can address a variety of applications, there is a need for more targeted principles that specifically address the issues related to warfare and the military use of AI.

Discussions about the use of AI and emerging technologies in warfare are increasing in various summits and conferences; the UK's AI Safety Summit is one example. Additionally, concern about the use of biological weapons is growing: unlike drugs, which need to work consistently, they only need to work once. This raises significant ethical and safety concerns.

AI’s capabilities are dependent on the strength of sensors. The cognition of AI is only as good as its sensing abilities. Therefore, the value and effectiveness of AI in warfare depend on the quality and capabilities of the sensors used.

One potential use of AI in warfare is to better target strikes and reduce the likelihood of civilian casualties. The aim is to enhance precision and accuracy in military operations to minimize collateral damage. However, the increased ability to conduct targeted strikes might also lead to an increase in the frequency of such actions.

One of the main concerns regarding the use of AI in warfare is the lack of concrete ethical principles for autonomous weapons. The RE-AIM Summit aims to establish such principles; however, there remains a gap in concrete ethical guidelines. The UN Convention on Certain Conventional Weapons has also been unsuccessful in effectively addressing this issue.

In conclusion, the discussions surrounding AI and emerging technologies in warfare highlight the potential benefits and concerns associated with their use. While these technologies can be leveraged to enhance military capabilities, there are ethical, safety, and interpretational challenges that need to be addressed. Targeted and specific principles related to the military use of AI are necessary, and conferences and summits play a crucial role in driving these discussions forward. The impact of AI on targeting precision and civilian protection is significant, but it also raises concerns about the escalation of conflicts. Ultimately, finding a balance between innovation, ethics, and regulation is essential to harness the potential of AI in warfare while minimizing risks.

Shimona Mohan

The discussions highlight the significance of ethical and responsible AI methodologies in military applications. Countries such as the United States, United Kingdom, and France have already implemented these strategies within their military architectures. However, India has chosen not to sign the global call for Responsible AI, prioritising national security over international security mechanisms and regulations.

The absence of national policy prioritisation of military AI poses challenges in forming intergovernmental actions and collaborations. Without a clear policy framework, it becomes difficult for countries to establish unified approaches in addressing the ethical and responsible deployment of AI in the military domain.

Gender and racial biases in military AI are also raised as important areas of concern. Studies have shown significant biases in AI systems, with a Stanford study revealing that 44% of AI systems exhibited gender biases, and 26% exhibited both gender and racial biases. Another study conducted by the MIT Media Lab found that facial recognition software had difficulty recognising darker female faces 34% of the time. Such biases undermine the fairness and inclusivity of AI systems and can have serious implications in military operations.

The balance between automation and ethics in military AI is emphasised as a crucial consideration. While performance in military operations is vital, it is equally important to incorporate ethical considerations into AI systems. The idea is to ensure that weapon systems maintain their level of performance while also incorporating ethical, responsible, and explainable AI systems.

The use of civilian AI systems in conflict spaces is identified as a noteworthy observation. Dual-use technologies like facial recognition systems have been employed in the Russia-Ukraine conflict, where soldiers were identified through these systems. This highlights the potential overlap between civilian and military AI applications and the need for effective regulations and ethical considerations in both domains.

Additionally, the potential of AI in contributing to bio-safety and bio-security is mentioned. A Netflix documentary titled “Unknown: Killer Robots” showcased the risk potential of AI in the generation of poisons and biotoxins. However, with the right policies and regulations in place, researchers and policymakers remain optimistic about preventing bio-security risks through responsible and ethical AI practices.

In conclusion, ethical and responsible AI methodologies are crucial in military applications. The implementation of these strategies by countries like the US, UK, and France demonstrates the growing recognition of the importance of ethical considerations in AI deployment. However, the absence of national policy prioritisation and India’s refusal to sign the global call for Responsible AI highlight the complex challenges in achieving a global consensus on ethical AI practices in the military domain. Addressing gender and racial biases, finding a balance between automation and ethics, and regulating the use of civilian AI systems in conflict spaces are key areas that require attention. Ultimately, the responsible and ethical use of AI in military contexts is essential for ensuring transparency, fairness, and safety in military operations.

Session transcript

Rosanna Fanni:
which is here, at least in Japanese time, quite late in the afternoon on the third day of the IGF. My name is Rosanna Fanni. I have actually been working for the Centre for European Policy Studies, in short CEPS, until last week, and this session is a very special one because it's a topic that I'm personally very passionate about and have been working on for quite some time now. It is also a special session because the topic that we are going to address today is normally not really at the center of the IGF discussions, which is, of course, the use of AI and emerging technologies in the broader defense context. Why is that topic relevant for the IGF? Well, let's just consider for a moment that almost all AI systems that we are currently speaking about, the models that are currently being developed, are of course used for civilian purposes, but at the same time they could also be used for defense purposes. That means they are dual use. And as we also know, today literally anyone has access to data, can easily set up machine learning techniques or algorithmic models, and can use coding assistants such as ChatGPT. So basically almost all the technology, the computing power, programming languages, code, encryption information, big data algorithms and so on has become dual use, and of course the military, not only the civilian sector, has high stakes in understanding and using these technological tools to its own advantage. Of course, we know that these developments are not really new. Things started with DARPA, which some of you may be familiar with, the US defense in-house R&D think tank, and DARPA was already at the time central to developing the internet, software, and also the AI that we all use today. But as we now see with the conflict in Ukraine, AI is already in full-scale use by defense actors and also has great potential to change power dynamics considerably, and our panelists will speak to that. 
While we have seen numerous developments and the use of AI already in those contexts, there is quite a disconnect from the civilian developments in AI, which include a large number of ethical principles, ethical guidelines, regulation, soft law, hard law and so on. We don't really see that happening yet in the defense realm, which for me is quite concerning, because the stakes and the risks in the defense context may be even higher than in civilian ones. And this is also, I think, a striking example when we look at the current European Union approach to AI: the much-applauded AI Act, which is risk-based, actually excludes virtually all AI applications that are used in a defense context. So defense AI is completely excluded from the scope of the AI Act, which is funny because it's called a risk-based approach, right? So this just as a means of introduction. We have a lot of urgent questions and very few answers so far, so I hope that our panelists will enlighten us. I will introduce the panelists in the order that they are speaking, each just before they speak. I also first want to introduce Paula, my now former colleague, who is based in Brussels and joins us today as our online moderator. We have foreseen half an hour, maybe the second half of the session so to say, where we want to hear from you and answer any questions that you have, with the panelists obviously, so get your questions ready. And I think now it's time to dive right into the discussion. To do that, I will introduce the first speaker, who is joining us here in person: Fernando Giancotti. He is a lieutenant general of the Italian Air Force, now retired, and also a former president of the Centro Alti Studi per la Difesa, which is, in short, the Italian Defense University. Fernando, the floor is yours.

Fernando Giancotti:
Thank you very much, Rosanna. I'm very honored to be here to share some thoughts about this topic, which, in the great debate about the ethical use of emerging and disruptive technologies, kind of lags behind. We have heard of so many organizations involved, and rightly so, in ensuring ethical behavior in many of the domains of our lives, and we don't see as many taking care of one of the most dangerous and relevant threats to our security and to the lives of our fellow people. So this panel, which I think is the only one here about the topic, is meant to give, let's say, a call on this. Wars and conflicts are unfortunately on the rise. I don't need to expand on that; I think we have enough from the media, and in the field a lot of violence is going on. But while this is at the very forefront of attention, the implications of what is already being used on the battlefield are not. So I argue that this is important for both ethical and functional reasons. According to a research study recently published about ethical AI in the Italian defense, a case study, commanders need clear guidance about these new tools. First, ethical awareness is ingrained both in education and in the system, which also implies swift penalties if you fail, and this is due, in democracies, to the assignment of the monopoly of violence to the armed forces. 
So ethical awareness is high, and there are also very practical grounds: accountability issues. Commanders are afraid that without clear guidelines they will have to decide and then be held accountable for that. Furthermore, and this is another major point that came out from the research, which by the way was authored by the moderator and co-authored by me, you can find it on LinkedIn, there is what I call the bad guy dilemma. This is a very functional problem about ethics in AI and EDTs in general applied to warfare: if we do not carefully balance value criteria with effectiveness, and so do not do a good job in finding that balance, while the bad guys do not, let's say, follow the same principles, then we will be at a disadvantage. This is another worry that came out from the research. So now let's go very quickly, in a few words, through what's going on about this on the battlefield, in the industry, and in the policy realm. On the battlefield we can see three main timelines: before Ukraine, during Ukraine, and, given many indicators, what we can imagine will happen after Ukraine. Before Ukraine, AI was not much used in warfare at all, except for experiments and a few isolated cases. But with the breakout of the Ukraine war things have changed massively, which means there has been a drive to employ all the means available. There is a very recent report, from a few weeks ago, from the Center for Naval Analyses, which shows that there is no evidence of extensive use of AI in the Ukraine war, except for decision-making support, which is of course critical. Still, there are several systems that can employ AI, and maybe they have in some cases, and there is for sure a big investment in trying to increase the capabilities of artificial intelligence in warfare. Given also the huge programs that are being developed, which already include artificial intelligence by design, we can expect a big increase in that. The industry, as a matter of fact, also because it is largely a dual-use industry, is working hard on this. We cannot expand on the systems that are being developed, but really there will be a major change in the nature of warfare due to AI. So this is briefly what happens now on the battlefield and in the industry, with governments' commissions. Now, what happens in the policy realm? The EU does not regulate defense because it is outside the treaties, but the EU is doing many things that are outside the treaties regarding defense, especially for the Ukraine war, so that is kind of a fig leaf, let's say, at this point. The UN, as a major international stakeholder, has focused on the highly polarized lethal autonomous weapons initiative, which doesn't move forward, but there is no comprehensive approach to tackle this with a more general framework. Single nations have developed ethical frameworks for AI in defense, but by definition, and remember the bad guy dilemma, this kind of framework is relevant only if it can be generalized at the largest possible level. So I think, in line also with the multi-stakeholder approach that is typical, for example, of this forum, we should have the UN join and lead the way towards a comprehensive ethical framework, a kind of digital ius in bello, in a multi-stakeholder approach. The UN was born out of a terrible war; its core business is to prevent and mitigate conflicts. And there is some good news: as Peggy Hicks of the Office of the High Commissioner for Human Rights said on Monday, we don't need to invent from scratch an approach to digital rights, because we have decades of experience in the application of the human rights framework. Likewise, I can say that we don't need to invent from scratch a way to implement and operationalize ethical principles in operations, because we have a decades-long approach in the application of international humanitarian law, with procedures and structures dedicated to that. The bad news is that we don't have
those general principles to operationalize at the strategic operational and tactical level before coming here in my previous job I was I’ve been the president for the Center for Higher Studies which is our National Defense University but also the operation before that the operational commander of the efforts and I can guarantee you that every operation has a very tight rigorous approach to for compliance to international humanitarian law which goes to specific rules for the soldier on the battlefield rules of engagements and things like that so my let’s say thought that can be of course discussed that’s very simplistic put in this way and maybe in the question and answer we can expand but it’s that we should really get a general effort because I think there is evidence that these ugly things that are wars and conflicts are not going away we better try to do our best to mitigate them thank you

Rosanna Fanni:
Thank you, thank you, Fernando, for your contributions. I already have some questions prepared, and we will come back to you in the question and answer session. So yes, I will hand over to the next speaker, who is also here with us in person: Pete Furlong. He is a senior policy analyst at the Internet Policy Unit at the Tony Blair Institute, a think tank. The floor is yours, Pete.

Pete Furlong:
Sure, yeah, and thank you for having me here. I think it's important when we talk about these issues that we ground them in specific technologies and think about what technologies we are talking about. As Fernando mentioned, we can often get caught in these conversations about lethal autonomous weapons, which can be pretty fraught, but there are a lot of other technologies that are important to talk about, especially when you think about emerging and disruptive technology beyond just AI. When you look at the Ukraine war, things like satellite internet are a very good example of that, but also the broader use of drones in the warfare, and it is important to realize that this extends beyond traditional military drones to consumer, commercial, and hobbyist drones as well. When we talk about things like that, it is important to realize that these systems weren't designed for the battlefield, and that is often the case for a lot of dual-use AI systems as well. They weren't designed with the reliability and performance expectations that a war brings, and the reality is that when you're fighting for your life you're not necessarily thinking about these issues. So it is important that in these forums we start thinking and talking about these issues, because this technology has a really transformative effect on these conflicts. The use of consumer drones in Ukraine is a great example: Ukraine has been able to leverage sophisticated US and Turkish attack drones, but also simple custom-built drones, and even drones from DJI, a consumer commercial drone provider, and from other companies as well. You are really blurring the lines between these different types of technologies, which have different governance mechanisms and different rules in place, so that is important for us to think about. The one other thing I would bring up, again moving beyond just the discussion about AI in weaponry to the military more broadly, is that you really have the potential to escalate the pace of war significantly. That is something for us to consider when we talk about ensuring there is space for diplomacy and for other interventions as well; often the intent of these technologies really is to accelerate the pace of war, and we need to think about the consequences of that as well. So thank you.

Rosanna Fanni:
Thank you, thank you very much. And it is good that you came back to this aspect that I already mentioned in the beginning: that war is now, as you say, almost like a community activity, because everyone can build a drone, can develop a model, and can almost be an actor in their own right, and that of course has manifold implications. Thanks a lot. Now I hand over to the third speaker, who joins us online from New Delhi, and I hope we are able to see her on screen soon. In the meantime, an introduction: Shimona Mohan is a junior fellow at the Center for Security, Strategy and Technology at the Observer Research Foundation, also a think tank, based in New Delhi, India. Shimona, the floor is yours. Thank you, Rosanna, I just wanted to check if you can see and hear me well before I start off. Excellent, we can see and hear you.

Shimona Mohan:
Fantastic, okay, thank you so much for having me on this panel. It is the perpetual blessing and curse of having talented co-panelists that my job is simultaneously easier and harder, but I hope the issues I will be speaking about will be of value as well. Since Fernando and Pete already spoke about why ethics are important, I will take the conversation further, into the domain of two separate methodologies around AI in defense applications that we have seen employed recently, and how they have come about in the defense space. The first one, which I would like to characterize, is explainable AI. While there is no consolidated definition or characterization of what explainable AI is, it is usually understood as computing models or best practices, or a mix of both technical and non-technical issue areas, which are used to make the black box of the AI system a little more transparent, so that you can delve in and see if there are any issues or blocks you are facing with your AI systems, in both civilian and military applications, and go in and fix them. So that is definitely something we are seeing come up a lot. As Rosanna mentioned earlier, DARPA was actually at the forefront of this research a few years ago, and now we are seeing a lot more players come in and adopt XAI systems, or at least put resources into the research and application side of them. For example, Sweden, the US and the UK have already started research activities around using XAI in their military AI architectures. We also have a lot of civilian applications being explored by the EU, as well as market standards by industry leaders like Google, IBM and Microsoft, and numerous smaller entities with much more niche sectoral applications. So that is one.
Another thing a lot of us are noticing in the defense AI space now is something called responsible AI. Responsible AI is understood as a broad-based umbrella terminology that encompasses within it trustworthy AI, reliable AI, and even explainable AI to a degree. It is mostly the practice of designing, developing and deploying AI in a way that impacts society fairly. Countries like the US, the UK, Switzerland, France and Australia, and a number of countries under NATO, have started to talk about and implement ethical and responsible AI strategies within their military architectures. Those who work in this area may also be aware of the Responsible AI in the Military Domain (REAIM) summit in the Netherlands, which was convened earlier this year as a global call to ensure that responsible AI is part of military development strategies, with about 80 countries present at that particular meeting. But the interesting thing, and this is where I would like to bring in a geopolitical angle, is that of those 80 governments present, only about 60 actually signed this global call. It is interesting to note that the country I come from, India, was one of the 20 that did not sign. The analysis for this ranges from considerations around national security to a prioritization of national security over international security mechanisms, which is something that countries like India have pursued before as well. India is actually one of the four or five countries that have not signed the nuclear Non-Proliferation Treaty either, and that was on the same principle of ensuring its national security rather than aligning itself with international security rules, regulations and softer laws. So that is an interesting dilemma here.
Another dilemma I would like to put my finger on is something Fernando mentioned earlier: the bad guy dilemma. Of course, there are no clear answers that solve the bad guy dilemma. But something that has been brought out by the responsible AI discourse around military AI is the fact that AI-based weapon systems, like lethal autonomous weapons and other defense aids, which have not been screened for responsible AI considerations, carry tangible risks of exhibiting biased or error-prone information processing in the operational environment in which they are deployed. Systems that do not have responsible or ethical AI frameworks around them also pose unintentional harms, not only towards the adversaries against which these military AI systems are employed, but possibly also for the entity employing them itself, which makes their use unnecessarily high-risk despite the other benefits they give the employing entity. And while we are on the subject of ethics and AI, I would also like to spotlight another aspect of this ethics debate: gender and racial biases in military AI. We already know there are a ton of biases that AI brings to the fore, not only in civilian applications but also in defense applications, and something that is given rather less emphasis is gender and racial biases. Gender is seen as a soft security issue in policy considerations, as opposed to hard security deliberations, which are given a lot more focus. And the issue of gender in tech, whether in terms of female workforce participation or otherwise, is also characterized as an ethical concern rather than a core tech one. This characterization of gender as an add-on essentially makes it a non-issue in security and tech agendas.
And if at all it is present, it is usually put down as a checkbox to performatively satisfy policy or compliance-related compulsions. But we have seen that gender and race biases in AI systems can have devastating effects on the applications in which they are employed. There was actually a Stanford study a few years ago on publicly available information on 133 biased AI systems across different sectors, so not just limited to the military but across the ambit of dual-use AI systems. About 44% of these exhibited gender biases, among which 26 included both gender and racial biases. Similar results have also been obtained by the MIT Media Lab, which conducted the Gender Shades study on AI biases, where we see that the facial recognition software packages now popularly employed in many places recognize, for example, white male faces quite accurately, but fail to recognize darker female faces up to 34% of the time. That means that if the particular AI system you employ in your military architecture has this kind of biased facial recognition, then 34% of the time, when it looks at a human, it does not recognize her as a human at all, which is of course a huge ethical issue as well as an operational issue. So, going back to the argument given by Fernando: ethics are not just a soft issue; they also have a lot of operational risks attached to them. My last point would then be about the blanks we are seeing emerge in how military AI is developing in terms of both gender and race. These blanks are threefold. The first blank would be the technological blank itself: when you have and are developing these AI systems, you have skewed data sets or uncorrected biased algorithms, which produce these biases in the first place.
The second blank would be in your legal systems: weapons review processes that lack gender-sensitive reviews, race-specific reviews, or reviews of any other aspect of your military system that could be biased. And the third set of blanks would be normative blanks, in terms of a lack of policy discourse around AI biases in military systems and how they affect the populations they affect most. So the idea for us now is to take forward these conversations about ethics, about biases, and about geopolitical specificities in military AI policy conversations, and to put them wherever we can so that they do not get left behind, and so that we are not only looking at military systems as killing machines, but as systems that need to be regulated according to a certain set of rules and regulations. Thank you so much, and I look forward to all the questions.

Rosanna Fanni:
Thank you. Thank you, Shimona. That was also super insightful. Thanks a lot for raising the issue of gender and race, which I think is already a big issue in the civilian context; this is replicated in the defense context, and definitely not sufficient attention is paid to this issue at the moment. Okay, so that concludes the first round of interventions. Thanks a lot to all the speakers. I will now hand over to our online moderator, Paula, to give us a short summary of the points we have just heard, and then maybe also already start the question and answer session, taking some online questions first. And I will invite the three of you, once you answer, to also refer to the points that you made, as we don't have a separate round of reactions from your side; feel free to include them in the question and answer session. Okay, over to you, Paula.

Paula Gurtler:
Yes, thank you. Greetings from Brussels, where it is still morning, so thank you for an interesting start to my day. There are so many interesting points that you have raised that it is really difficult to settle on just three takeaways. The first one, though, would be that we need ethical principles at the international level: we need to find some kind of agreement so that we can move forward with ensuring more ethical practices in military AI applications. That also relates to the accountability issues raised by the commanders in the Italian defence forces. The second, and for me probably the main takeaway of the entire session, is that the conversation is much bigger than LAWS. By focusing only on lethal autonomous weapon systems, we really miss out on much of the conversation on explainable AI and responsible AI, and also on what you mentioned, Shimona, in the last intervention: we miss out on gender and racial biases if we just focus on LAWS and these extreme use cases. So I really think that "the conversation is bigger than LAWS" is one of the main takeaways. Another point that complicates the whole use of AI in the military is, of course, geopolitics and the power plays that are pitting stakeholders against each other. These are already so many interesting points, and I would love to give our online audience a chance to raise their questions. Please feel free to raise your hands, or type in the chat if you have any questions. But if there aren't any, I have my own questions, which I am really excited to ask, so I will start off with my own question and then come back to the online participants; please don't hesitate to be in touch via the chat. What I am wondering, on the ethical principles that we need for AI in military use, is: do we need different ones from those that we already have? We know how many ethics guidelines are floating around.
And I’m wondering, do we need different ones for use of AI in military contexts? I also heard bias plays a role, responsible AI, explainable AI. Do we need ethical principles that look different to those that we have right now to cover the military domain? So thank you so much. I’m really looking forward to the continuation of the discussion.

Rosanna Fanni:
Okay. I don’t know who wants to address this question. And then we also, of course, go to an in-person round of questions. You will not be forgotten, but maybe we can first address this one. I don’t know who wants to go first. Okay. We do first this round, and then we’ll have another round of questions with the in-person one. Yeah. Go.

Pete Furlong:
Yeah, I think it's a great question. In an ideal world these principles would be the same, and that would be great. But I also think there is an element of, maybe not necessarily do we need different principles, but do we need more targeted principles that address some of the issues we are seeing more specifically? Most of these AI principles are very useful and important, but they are intentionally broad, because they are meant for a wide variety of applications. That poses a challenge when we talk about how we implement them, and you can end up in a situation where different countries interpret these things very differently. I think that is maybe the risk in having pretty broad principles open to interpretation here.

Rosanna Fanni:
Shimona, and then Fernando, you also want to say something? Yeah, we will have Shimona first and then you can go. Please.

Shimona Mohan:
Thank you, Rosanna. Just to add on to Pete’s already very substantive point, I would also like to highlight the fact that in the absence of national policy prioritization of military AI, it’s very hard for countries to actually go ahead and form intergovernmental actions around military AI. So while we speak of ethical principles, since AI itself is not really a tangible entity that you can control via borders, the most effective sort of ethical principles might only emerge from intergovernmental processes around this. But to get to that step where we are discussing substantive ethical principles in substantive intergovernmental processes, I think the first step is to have a good national AI policy for all the countries who are currently developing military AI systems or any other systems around AI which might have military offshoots. So that would be sort of my two cents on this.

Fernando Giancotti:
Very quickly: I think that the quality of the process does not change from what has always happened, also for all the other ethical issues that have been raised and tackled, for example after World War II with the constitution of the UN and then the implementation of the agreed guidelines. There has always been a very dialectic and contradictory process, and we will never get a perfect framework that everybody is going to comply with. But striving for the best possible balance has no alternative, because the alternative is to let things go, possibly in the worst possible way. So we have no certainty, judging also from what we see with the other big agreements, about nuclear and also unconventional weapons and many other frameworks. Shimona mentioned exactly that some countries prefer the national interest in specific cases, and so this is going to happen. But this doesn't mean that we shouldn't strive to push compliance forward as much as possible through, as has been said, the intergovernmental process, and especially the organizations that have the responsibility to promote this.

Rosanna Fanni:
Thank you. Fantastic. We’re already in the midst of the debate. We will now take the in-person questions, maybe also one after each other, and then I also hear from Paula that we have another online question, so we will take that afterwards. But first, if you would like to ask questions, also maybe briefly introduce your name, your affiliation. I see you don’t have a microphone. Maybe you would like to use this one. It’s a bit far away, but if you have this one already.

Audience:
Yes, my name is Julius Endert from DW Akademie in Germany, part of the broadcaster Deutsche Welle. I would like to ask Fernando: from your perspective as a military leader, does AI make our world safer or not? We are coming from the massive retaliation strategy of 25 years ago, and if we are now living in a situation where states or NATO may think that a pre-emptive strike is better when the other side has massive AI capacities, and also in tactics, when we compare our own capacities on the battlefield, we might likewise think, okay, let's go for a pre-emptive strike. In the end that would mean our world becomes less secure than it was before because of AI. So, what do you think?

Rosanna Fanni:
Thank you. We'll just take the other question first, and then we'll answer together. If you would like to ask your question now? I see you have a microphone; you have to switch it on.

Audience:
So, thank you. I’m Rafael Delis. I’m a scientist in infection biology, and I am concerned about an invisible battlefield, that is biological warfare and non-state actors. Now with AI and deep learning, generating bioweapons has never been easier. So, I’d like to use this forum to ask what should we do to ensure biosecurity and peace.

Rosanna Fanni:
Thank you. Also, a very pressing question for sure, especially in international context. Over to the speakers for their replies. Maybe Fernando, you’d like to go first this time, and then I’ll let Shimona and Pete fill in.

Fernando Giancotti:
The question is very interesting. By the way, the paper I just mentioned, the one from CNA, also talks about this: whether massive use of AI will make things more stable or more unstable. Now, there are good grounds to say it could go either way, which is how things have always been; it could go one way or another. What I think, and I am very interested in the augmented cognition that AI can bring, is that many strategic mistakes that led to wars could be avoided if we really get to an excellent degree of augmented cognition. For example, if you study wars, you see that most of the time it was a strategic miscalculation that led decision makers to start wars for which they paid a very high price, much higher than expected. Had they had less fog of war, most likely they would not have done that. The Ukraine case is a perfect example. So I think that if we can, and right now we cannot, use AI for an actual quantum leap in strategic decision making, then this should be a stabilizing factor in most cases. There will anyway be cases, I think, in which this augmented cognition will prompt intervention, so again, either or. But it is better to go toward augmented cognition, judging from the blood that has been shed through miscalculation so far.

Rosanna Fanni:
Thank you. Okay, Shimona, Pete, I don’t know who wants to also add something, maybe also to the second question. Shimona, you want to go next? Sure.

Shimona Mohan:
I can add just one more point to Fernando's already very well done answer, and then I'll take the second question. On the question of whether military AI makes the world less secure or safer: I think all weapon systems are developed with the singular focus of giving yourself an edge over your adversary, as a result of which, in a systemic sense, it definitely makes the world a lot less safe. But then we also have the idea of what kind of cobra effect will come about from this, what kind of opposite effect we might see emerging. I think Fernando highlighted that very well when he said that this augmentation factor might lead to a higher threshold for war, which might eventually make things safer. But again, these are just optimistic viewpoints at this point, and it remains to be seen how this plays out in the global scenario. On the second question, of biosafety and security, it is very correct to say that AI is something that will contribute a lot to this domain as well; in fact, it is already a risk factor that a lot of issue domains and experts are aware of. There is a documentary on Netflix called Unknown: Killer Robots, and it was chilling in the sense that it showcased a lot of these military application potentials, which we haven't really explored much in the lethal autonomous weapons debate at the intergovernmental level. One of these risk potentials was how AI can be used to make many more poisons and biotoxins and to generate them at an alarming speed, of which we as humans are at this point not capable. And this is even more exacerbated by generative AI applications now. So it is very right to assume that AI will lead to a lot more of these risk potentials around biosecurity coming up. But at the same time, anything that is a genius for the wrong things can also be a genius for the good things.
So let’s hope that while we have malicious actors or nefarious entities sort of taking over the biosecurity domain from the negative side, there are also scientists and policy researchers and normative actors working on the regulation side to help prevent that from happening, or at least having punitive measures in place before and when it happens. And that’s unfortunately the best I can say for now.

Pete Furlong:
Yeah, and just to add quickly to what my colleagues have said: on the biological weapons side of things, one of the concerns I have is about these types of use cases. If I'm using a generative AI system to develop some sort of drug to help people, that needs to work every time; if I'm developing a biological pathogen for some sort of attack vector, it only needs to work once. So there is a gap in terms of capabilities that is very important for us to recognize when we talk about trying to address this at this stage, and I think it poses a significant challenge. The good thing I will say is that this issue of biological weapons is something people are starting to talk about a little more; I know that with the UK AI Safety Summit, it has been one of the topics to be addressed there. And then, actually, to build on what Fernando said earlier about this idea of improved cognition: one of the potential fears I have with that is that cognition is only as good as your sensing. My background is in robotics, and one of the things in robotics that is very challenging is that you can have a very good robotics software system, but if your sensors aren't strong enough and aren't able to perceive the information, then that doesn't really buy you anything. So I think it is important for us to consider that these AI software systems exist in a broader system and a broader ecosystem, and to consider all those factors as well.

Rosanna Fanni:
Absolutely, thanks a lot. And if I may abuse my moderator role a bit and add one tiny point about bio-risks, bioethics and biotech: I think with COVID we have seen a complete shift of mindset in institutions, and I can only speak about the EU because that has been the focus of my work, but with the lessons learned from the COVID pandemic, institutions have at least woken up and seen that they need to be much better prepared to tackle these challenges and the risks emerging from the rapid spread and cross-fertilization between technology and bio. As you may know, the Commission itself has established an entirely new directorate-general, HERA, the Health Emergency Preparedness and Response Authority, which deals not only with pandemic preparedness but also with protecting civilians from these risks, including bio-risks. I have friends who work in this department, so it is always very insightful to hear that institutions are already thinking about this issue. But still, so much more needs to be done, and especially when you look at international institutions, much more foresight will need to happen. Foresight, as we know, is a tool: it is not about foreseeing the future or being a storyteller of what will actually happen, but about being prepared and knowing certain scenarios and certain risks. I think there needs to be much more investment in research and development on foresight, in methodologies, and in training civil servants, the capacity building that is also mentioned a lot in this context, so that eventually institutions themselves can be prepared, and hopefully the world as such too, so that global South nations in particular are not left behind. Of course, if you have more capacity to set up your institutions accordingly, you will hopefully be better prepared, but this should not mean there is, again, a race between global North and global South countries over who arrives there first, and of course global South countries often do not have the appropriate resources to work on these topics. So I think it is really important that international institutions, such as the United Nations, take on more responsibility on this point. Okay, now I have talked a lot and abused my moderator role, so I'll hand over to Paula for the online question. I hope the person is still there, interested and following. Over to you, Paula.

Paula Gurtler:
Yes, I think I can confirm that the person who asked the question is still here and interested and engaged, because they asked a second question, and I would like to offer it to you, Lloyd, to actually take the floor yourself, if that is possible, otherwise I’m also happy to read your question aloud.

Rosanna Fanni:
So I think it should be possible if the technical department is able to allow it. I think the person can unmute themselves and just ask the question out loud.

Audience:
A very good morning to all. Thank you so much for the session, first and foremost, and it’s a great pleasure to be part of great conversations that will obviously be impacting the way the world is gonna be looking at things. So my first question is, oh, sorry, my name is Lloyd, and I’m actually calling in all the way from South Africa. So, looking at the great work that everybody’s doing on the platform, my first question would be: what are the ethical considerations when developing and deploying autonomous weapon systems, and how do we strike a balance between human control and automation? How does the body of CSIJF look at that? Then should I just quickly ask the second question, sorry, Paula? Okay, awesome. And then the second question is: how can AI be leveraged to reduce civilian casualties and minimize collateral damage in armed conflicts, and what ethical principles should guide this use itself? If there’s any thought been put around that as well. So those are my two, well, I’ll call them three main questions from my side. Thank you.

Rosanna Fanni:
Thank you. Thank you, Lloyd, for asking the question and joining us all the way from South Africa. Greetings from Kyoto. I don’t know who wants to answer this question. Pete, do you wanna go first this time?

Pete Furlong:
Yeah, I mean, thanks a lot for some great questions. And maybe just to take your second question first: I think there’s been a lot of talk about trying to use AI to better target strikes and reduce the likelihood of civilian casualties. So that’s been one way in which people have been talking about using AI to reduce the likelihood of those issues. But I think it’s worth also bringing up the flip side of that, which is that if you can conduct more targeted strikes, we might see more strikes. And when you look at the use of drone strikes in the past 20 to 30 years, maybe that’s the reality. In terms of ethical principles being used for autonomous weapons, I think the REAIM summit’s goal is to try to get to that, but for now it’s more of a call to action, and I don’t think we necessarily have anything concrete. And the UN Convention on Certain Conventional Weapons has tried, and somewhat failed to this point, to address that as well.

Rosanna Fanni:
Thank you. Over maybe to you, Shimona.

Shimona Mohan:
Thank you so much for those questions. And I think these are sort of the cardinal questions that we also have to ask ourselves when we research around military AI and ethics. On your first question about the balance between automation and ethics, I think that’s a very, very pertinent question, because that’s also something that the explainable AI domain is struggling to contend with. In fact, the performance and explainability trade-off is something that’s very well-established within the AI and machine learning space: the more explainable, or let’s in this case say the more ethical, your system is, the less performant or capable it would be in terms of its performance levels. So there is this established idea which sort of pits these two values against each other. My personal take would be that it probably is a false dichotomy. There’s definitely a lot that we’re looking into which makes sure that we’re not compromising on one aspect of a weapon system to ensure that another aspect is fulfilled. So in an ideal scenario, of course, this would not even be a question, and you would always go for the ethical point over the performance factor. But because this is a realistic question, I think the idea is more around ensuring that these systems have and retain their level of performance, while also having an add-on of ethical, responsible, or explainable AI systems attached to them. Of course, how well they are ensured is something that only the country’s military systems know about, because this kind of information is usually classified, or is behind a number of barriers when it comes to weapons testing, et cetera. But the idea would definitely be to make sure that we’re not compromising on one for the other. 
And I think policy conversations are also going according to that tune itself, that we’re not policing your capacity to build your weapon systems to its fullest capacity, but we’d also like to make sure that these particular systems are ethical enough to send out into the world without causing undue harm. So as of this point, that’s where the conversation is stuck, of course. As and when we advance more in this field, we’ll have a lot more nuanced ideas around where this particular balance stands at that point. On your second question, I think Pete sort of summarized it perfectly, and I have very little else to add, except for the fact that maybe in terms of casualties, we’re still looking more towards civilian AI systems being employed rather than military AI systems. Of course, this line is blurred in a lot of places, but for example, facial recognition systems are a good example of a dual use technology. And these systems have been employed in, for example, the Russia-Ukraine conflict, where soldiers were sort of identified through these facial recognition systems, and then their remains were sort of transported to either side. So there are a lot of these, so to speak, civilian AI applications which are being employed in conflict spaces. Whether or not they minimize civilian casualties is still a larger question that we’re contending with.

Rosanna Fanni:
Thank you. And the last word to Fernando.

Fernando Giancotti:
Thank you, Lloyd, for your great questions. Very quickly, the research I mentioned before, and by the way, I want to thank the Center for Defense Higher Studies for having sponsored this research, has a table, and so if you go on the LinkedIn profile of Rosanna or mine, you will find this table with five examples of ethical principles which have been developed by the UK, USA, Canada, Australia, and NATO, which talk basically about human moral responsibility, meaningful human control, justified and overridable uses, just and transparent systems and processes, and reliable AI systems. So these are, as we said, principles which have been developed by single nations, and I just made a kind of summary, because they are different, okay? They are not the same on the table. Now the problem is to get, let’s say, a more general framework, as we said, which will have to be negotiated, and that will not be easy. As for collateral damage, I can speak with cognizance, because I can guarantee you that when I talked about operationalizing international humanitarian law, there is a process and procedures with specific rules and specialized legal advisors who evaluate compliance and, let’s say, clear the commander’s decision to engage. In some cases, I can tell you, and this is not classified information, we had drones for 48 hours over an area to observe movements before deciding to engage. So this means that in today’s systems already, this issue is of a high priority. That doesn’t mean that there are never mistakes, unfortunately. AI, if it is used with the human in the loop, can help do better. I can tell you that at this point, at this stage of the game, I’ve heard nobody saying that they would relinquish the final decision to the machine. I think we cannot think that. We cannot trust AI to drive a car, which is a simple task. Can we trust it to do much more relevant things?

Rosanna Fanni:
Yes. Okay, thank you. Thank you very much. Being mindful of the time, we are already three minutes over, so I would conclude the session now, saying that I think we answered some questions, but we have probably added a lot more questions during the conversation. So, yeah, feel free to reach out to the three speakers. You can find them all, I think, on LinkedIn, and they’re always more than happy to engage on these topics. Feel free to connect. And my former colleague Paula has already put the link to the study in the chat, so you can also retrieve it and read it on your own, the case study that Fernando and I co-authored. And with that, wishing everyone a great rest of your day or evening, wherever you are, and thank you a lot again for your attention.

Audience

Speech speed

166 words per minute

Speech length

436 words

Speech time

158 secs

Fernando Giancotti

Speech speed

120 words per minute

Speech length

2087 words

Speech time

1046 secs

Paula Gurtler

Speech speed

187 words per minute

Speech length

516 words

Speech time

166 secs

Pete Furlong

Speech speed

167 words per minute

Speech length

1261 words

Speech time

454 secs

Rosanna Fanni

Speech speed

171 words per minute

Speech length

2326 words

Speech time

817 secs

Shimona Mohan

Speech speed

169 words per minute

Speech length

2911 words

Speech time

1032 secs

All hands on deck to connect the next billions | IGF 2023 WS #198

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Joe Welch

Disney Plus is a platform that focuses on creating and distributing amazing content to drive demand. One of their key strategies is to bring local, in-country content to their platform, recognizing the importance of cultural relevance. This approach has been successful in regions such as India and Asia, where Disney has been able to produce vibrant content specifically tailored to the preferences and interests of local audiences. For example, the Korean show “Moving” has gained popularity across Asia and on Hulu in the US, showcasing the effectiveness of adding local content to platforms like Disney Plus.

In addition to emphasizing local content, Disney Plus also places a strong emphasis on being a good partner in the communities it enters. They do this by actively cooperating with in-country telcos, creative industries, and policymakers. By forming partnerships with these stakeholders, Disney Plus aims to integrate itself into the fabric of the community and contribute positively to its development. This approach not only helps them establish a strong presence in the market but also fosters collaboration and mutual benefit.

Furthermore, Disney Plus is actively engaged in projects that support digital literacy and online safety. They work with governments and NGOs in 20 countries to fund and implement initiatives aimed at improving digital literacy skills and promoting online safety practices. One notable example is their partnership with the Indonesian NGO Ganara, where they use old-school art techniques to teach digital literacy. Another example is the Latin American project Chicos.net, which trains teachers in imparting digital literacy to their students. Through these projects, Disney Plus demonstrates its commitment to advancing education and ensuring that individuals are equipped with the necessary skills to navigate the digital landscape safely and responsibly.

Joe Welch, the speaker, highlights the effectiveness of hands-on projects like those undertaken by Disney Plus in increasing digital literacy. He acknowledges the importance of engaging directly with communities and utilizing creative approaches to impart knowledge and skills. This approach not only enhances learning outcomes but also fosters active participation and empowerment among individuals.

Additionally, Joe Welch affirms the value of a multilateral, multistakeholder approach. He emphasizes the need for collaboration and participation from various sectors, including academia, civil society, and industry. Through active involvement and open dialogue, this approach allows for a holistic and comprehensive understanding of issues, enabling more effective solutions to be developed. Joe supports the idea that all stakeholders should have a voice and actively participate in decision-making processes.

Furthermore, Joe Welch emphasizes the importance of inclusion and self-representation in decision-making processes. He shares a South African quote, “Nothing for us without us,” which signifies the need to include and empower all individuals, particularly those from marginalized groups. Joe plans to incorporate this principle into his future presentations, recognizing the transformative power of inclusivity and the valuable insights that can be gained by giving everyone a seat at the table.

In conclusion, Disney Plus is a platform that not only focuses on creating and distributing amazing content but also prioritizes bringing local, in-country content to its platform, being a good partner in the communities it enters, supporting digital literacy and online safety projects, and advocating for a multilateral, multistakeholder approach and inclusion. Through these initiatives, Disney Plus demonstrates its commitment to providing high-quality entertainment while positively impacting the communities it serves. Joe Welch, in his analysis, highlights the importance of hands-on projects, active participation from all stakeholders, and the value of inclusion and self-representation in decision-making processes.

Audience

Disney is driving global demand for its content through its streaming service, Disney Plus, by offering a wide range of content from renowned franchises such as Lucasfilm, Marvel, Pixar, Disney, and Nat Geo. This strategy aims to captivate audiences worldwide and cater to diverse preferences and interests. Additionally, Disney recognizes the value of creating local, in-country content to enhance its global reach. In India, Disney operates under the Star brand and produces vibrant content specifically tailored for Indian viewers. This approach has contributed to Disney’s strong presence in the Indian market. Moreover, the success of local content is evident in other regions, such as Uganda, where positive audience response highlights the significance of local language production. The audience resonates with content produced in local languages, emphasizing the importance of representing and embracing local culture. Disney’s strategy of combining global and local content has been effective in driving global demand and fostering cultural diversity and inclusivity. Disney continues to strengthen its position as a global entertainment powerhouse by delivering compelling and culturally relevant content.

Michuki Mwangi

Expanding internet connectivity is a complex task that requires innovative approaches, responsive to the needs of local communities. Traditional models face challenges, particularly in terms of business operations and return on investment. To bridge this gap, it is necessary to establish connectivity based on the realities of people living in remote and underserved areas.

Community networks, owned and developed by local communities, are a viable solution for expanding connectivity. A portion of the fees paid for connectivity is reinvested within the community, promoting further development. These networks have the flexibility to adapt to any technology that best serves the community’s needs.

For community networks to succeed, a supportive policy and regulatory framework is essential. Countries must develop policies and regulations that recognize new models and access solutions. Access to funding and rights-of-way for infrastructure construction are key considerations.

Relevance of internet services is another crucial aspect of expanding connectivity. Addressing low incomes and ensuring the value and relevance of available content is important for individuals considering investing in connectivity.

Efforts to increase internet access have already been identified, and now it is crucial to scale up these efforts. The Internet Society is willing to support this movement, as solutions to connect more people already exist. Increasing funding for deployment is an essential step towards scaling up efforts and achieving widespread connectivity.

Partnerships and collaborations are also necessary for success in the mission to connect everyone. Almost every panelist agrees that this mission cannot be accomplished individually, highlighting the importance of increased partnerships.

In conclusion, the 2030 vision of universal internet connectivity is achievable. By implementing innovative approaches, supporting community networks, developing a supportive policy framework, ensuring relevance of internet services, scaling up efforts, and fostering partnerships, significant progress can be made towards achieving universal connectivity.

Rose Payne

Almost one-third of the global population, approximately 2.6 billion people, is still without internet access, highlighting the persistent digital divide and connectivity gap. Moreover, the disparity between men and women in terms of online access is actually increasing, which is a concerning trend.

Meaningful connectivity goes beyond having the necessary infrastructure and devices. Factors such as the availability of relevant services and content, users’ digital skills, and security and safety while using technologies play a crucial role.

Various stakeholders agree on the urgent need for practical solutions to bridge the digital divide and facilitate universal connectivity for all. This necessitates finding actionable and concrete measures rather than just discussing problems. A panel discussion involving experts in policy and technology aims to identify and implement such solutions.

Governments have a significant role in addressing the digital divide by improving digital skills for individuals and themselves. Special attention should be paid to rural areas, which often face greater connectivity challenges.

It is important to emphasise and defend the multi-stakeholder model, which encourages collaboration and partnerships between various sectors. This model has proven effective in achieving connectivity goals.

Adopting a holistic ecosystem approach is crucial in addressing the underlying causes of the digital divide. By considering all aspects of the ecosystem, comprehensive solutions can be developed.

Improving “cyber hygiene” skills, which involves educating individuals on safe and secure internet practices, is also important. Governments have a critical role in promoting and supporting such initiatives.

In conclusion, the statistics highlighting the lack of internet access for almost one-third of the global population underscore the urgent need to bridge the digital divide and ensure universal connectivity. Achieving meaningful connectivity requires addressing various factors such as relevant services, digital skills, and security. Governments, policymakers, and stakeholders must collaborate to find actionable solutions and adopt a holistic ecosystem approach, creating a more inclusive digital future for all.

Giacomo Persi Paoli

Digital technology has the potential to drive significant economic, social, and societal transformation, helping to accelerate progress towards the Sustainable Development Goals (SDGs). One speaker highlights the importance of using digital technology responsibly and safely to harness its transformative power. It is emphasised that responsible and safe use of technology is explicitly linked to the potential for digital technologies to accelerate progress towards the SDGs. This indicates the need for individuals and organisations to be mindful of the ethical and security considerations associated with digital technology.

Trust in technology, companies, and the government is identified as a critical factor for effective engagement with digital technology. Building trust is particularly important in fostering digital inclusion. By instilling trust in users, it becomes easier for them to engage with technology and leverage its benefits.

Another area of concern is government preparedness against cyber threats. A report by the Economic Commission for Africa reveals the high cost of unpreparedness in cybersecurity, estimated to be as high as 10% of national GDP. This highlights the importance of governments prioritising cybersecurity measures and investing in necessary infrastructure and expertise to mitigate potential threats.

Investments in skills training, especially in the field of cybersecurity, are deemed necessary. Such investments are not only crucial for enabling users to engage safely online but also help address the global shortage in the cybersecurity workforce. By equipping individuals with the necessary skills, they can effectively navigate the digital landscape and contribute to a safer and more secure online environment.

Connectivity is identified as a catalyst for new discoveries, innovations, and learning opportunities. The transformative potential of connectivity is emphasised, suggesting that it should be seen as a new beginning rather than the end of the journey. However, the importance of preparedness and investment in connectivity is emphasised, as changes brought about by connectivity must be anticipated and adequately addressed.

The need for upskilling is highlighted, particularly in the context of the anticipated influx of 2.4 to 2.6 billion new internet users. These individuals may lack the necessary skills for safe and responsible internet use. Therefore, efforts should focus on upskilling them to ensure that they can make the most of the opportunities provided by digital technology while staying safe and responsible online.

Governments are encouraged to improve their digital literacy and knowledge skills. As digital transformation affects not only citizens but also governments, it becomes crucial for governments to be digitally literate and engage with other governments on an equal footing. This highlights the importance of collaboration and partnerships in achieving the SDGs.

The importance of skills development is emphasised as a key pillar of preparedness for a more connected world. Addressing skills gaps is seen as essential in effectively navigating the challenges and opportunities presented by digital technology and connectivity.

Overall, responsible and safe use of digital technology, building trust, government preparedness, investments in skills training, connectivity, upskilling, and government engagement are crucial in harnessing the power of digital technology to drive positive change and contribute to sustainable development.

Onica Makwakwa

The Global Digital Inclusion Partnership aims to advance meaningful connectivity, especially in rural areas, through collaboration among stakeholders at national, regional, and global levels. By leveraging multi-stakeholder partnerships, the partnership seeks to bridge the digital divide and enhance digital inclusion.

The digital gender gap remains a significant obstacle, with an estimated cost of a trillion dollars in GDP over 10 years in 32 low and middle-income countries. Addressing this gap is crucial for economic growth and reducing inequality.

Affordability of internet access is another key challenge, with the current standard of 2% for one gig per month considered inadequate. Ensuring daily internet usage and affordable devices, particularly in Africa, is essential for achieving meaningful digital connectivity.

Supporters of the Global Digital Inclusion Partnership advocate for a policy approach to tackle these issues. They emphasize the need to raise standards of affordability and speed, invest in digital skills, and promote public-private partnerships. Mainstreaming gender in ICT policies using frameworks like REACT is also crucial for inclusivity.

Improved connectivity and digital skills have a positive impact on women. Internet access enabled women to transition their businesses online during the COVID-19 pandemic, preserving their income. Women with internet access were also more likely to complete online courses to upgrade their skills. Bridging the digital divide empowers women economically and improves their financial stability.

Promoting a broader conversation on digital skills, including coding, online business management, and mobile money operations, is vital for work and economic growth in the digital era.

The existing gaps in digital connectivity are a result of policy choices. Collaborating with policymakers to enact corrective measures and narrowing these gaps is essential. Engaging rural communities in the development of broadband policies ensures an inclusive approach.

Embedding meaningful connectivity indicators with key ICT statistics helps monitor progress and evaluate the impact of digital inclusion initiatives. Public access solutions play a crucial role in providing affordable resources to rural and remote areas, contributing to reducing inequalities in connectivity.

In conclusion, the Global Digital Inclusion Partnership, through its multi-stakeholder approach and policy interventions, aims to advance meaningful connectivity and bridge the digital divide. Addressing the digital gender gap, improving affordability and speed, fostering digital skills, engaging policymakers and rural communities are crucial steps for achieving equitable and inclusive digital development.

Atsuko Okuda

The analysis emphasises several significant points regarding the digital divide and internet connectivity. Firstly, it notes that the rate of connecting the unconnected is slowing down, which is a concern. In 2023, approximately 2.6 billion people remain without internet access. This highlights the need to accelerate efforts in bridging this divide and ensuring universal internet access.

Another important issue identified is the affordability of broadband services. In some countries, the cost of accessing the internet is too high, acting as a major barrier to its adoption. Affordability is measured using a 2% GNI per capita benchmark, and many countries fall short of this benchmark. This finding emphasises the importance of addressing the financial constraints faced by individuals and communities in accessing the internet.

To address the digital divide, a whole-of-government and whole-of-society approach is advocated. Initiatives are being implemented in various countries to provide essential services such as education, health, and commerce. These initiatives are based on common building blocks, such as national identification systems, which facilitate the delivery of these services to underserved populations. The International Telecommunication Union (ITU) is leading this approach, with approximately 15 countries already participating in the initiative.

The impact of technological developments, such as artificial intelligence (AI) and data analytics, is also highlighted. These advancements are rapidly transforming both connected and unconnected communities. For instance, the new normal includes AI solutions and data-intensive decision-making in areas like e-commerce, traffic management, and mobile banking. Consequently, there is a demand for new knowledge and skills to effectively navigate this evolving technological landscape.

The analysis underscores the need to reassess the concept of digital literacy and digital skills in the new normal. With jobs becoming redundant or created due to technological advancements, individuals without the necessary digital skills may face difficulties in the job market. This finding suggests that a comprehensive approach to education and training, focusing on digital literacy and skills, is crucial for individuals to thrive in the digital era.

Lastly, the analysis highlights that partnerships play a critical role in addressing challenges related to digital skills. Collaborative efforts between various stakeholders, including government, the private sector, and civil society, can contribute to the development and implementation of effective strategies in closing the digital skills gap.

In conclusion, the analysis underscores the urgent need to bridge the digital divide and ensure universal internet connectivity. Affordability remains a major obstacle, while a whole-of-government and whole-of-society approach is necessary to address this issue. The rapid pace of technological advancements calls for new knowledge and skills to adapt and thrive in the digital era. Additionally, it is crucial to reassess the concept of digital literacy and skills and foster partnerships to tackle challenges related to digital skills. By addressing these areas, we can work towards a more digitally inclusive society.

Takashi Motohisa

Amazon’s Project Kuiper is a groundbreaking initiative aimed at addressing the global issue of limited broadband access. The project plans to achieve this goal by deploying over 3,200 satellites in low-Earth orbit. These satellites will provide internet connectivity to underserved and unserved communities worldwide. Project Kuiper’s mission is to ensure that these communities have access to internet speeds and latency that are on par with terrestrial networks, thus bridging the digital divide.

Additionally, Project Kuiper aims to assist wireless carriers in extending their LTE and 5G networks to new regions. This collaboration will enhance network coverage and enable more people to connect to the internet.

Engineers at Amazon have also introduced three innovative customer terminal models as part of Project Kuiper. The largest model can deliver speeds of up to 1Gbps, while the smallest ultra-compact model provides speeds of up to 100Mbps. These terminals will play a crucial role in providing reliable internet access to customers in remote and underserved areas.

The project is progressing steadily, with the recent launch of the first two prototype satellites. Service delivery to customers is expected to begin in late 2024, bringing internet access to communities that have long been left without.

Recognizing the importance of partnerships, Amazon has invested $10 billion into Project Kuiper. This investment highlights Amazon’s commitment to bridging the digital divide and expanding internet access. Key partnerships have already been formed, including with Vodafone and Vodacom, with more expected in the future.

Takashi Motohisa, a prominent advocate for bridging the digital divide, strongly supports Project Kuiper and emphasizes the significance of technological advancements in addressing the issue. Motohisa’s endorsement reinforces the project’s dedication to its mission and underscores the importance of initiatives like Project Kuiper in creating a more connected and inclusive world.

In conclusion, with the deployment of thousands of low-Earth orbit satellites, Amazon’s Project Kuiper aims to revolutionize global broadband access. The project’s commitment to providing comparable internet speeds and latency to terrestrial networks is crucial in bridging the digital divide. Through partnerships and the support of leaders like Takashi Motohisa, the project represents a significant step towards a more connected and inclusive world.

Pablo Barrionuevo

The focus of the conversation on digital inclusion has shifted from the connectivity gap to the usage gap. Currently, around 3.2 billion people have access to mobile broadband connectivity but remain unconnected to the internet. This lack of connection can be attributed to factors such as affordability, lack of skills, lack of trust, and gender disparities.

Addressing this issue requires forming partnerships, as no single entity can connect the unconnected on their own. An excellent example of a successful partnership is Internet para todos, a collaboration between Telefonica, Meta, and the Inter-American Development Bank in Peru. Such partnerships bring together different stakeholders and resources to bridge the digital divide.

Additionally, there is a belief that we have the necessary technologies to connect everyone. Technological barriers do not pose the main obstacles to achieving digital inclusion. However, it is crucial to acknowledge that what works in one location may not work in another. Adaptability and flexibility are essential in finding the right solutions for each specific context.

Collaboration is also vital in connecting the unconnected. Collective efforts and cooperation can have a significant impact on digital inclusion globally.

Pablo Barrionuevo, an advocate for broader connectivity, supports the idea of flexible and localized solutions. He emphasizes the importance of collective efforts in addressing this issue. It is evident from his stance that a combination of technology, flexibility, and collaboration is fundamental to establishing inclusive and sustainable connectivity for all.

In conclusion, the conversation on digital inclusion has evolved to consider the usage gap as well as the connectivity gap. Approximately 3.2 billion people lack internet access despite having access to mobile broadband connectivity. Partnerships, flexible solutions, and collective efforts are necessary to connect the unconnected. While the required technologies are available, they need to be adapted to suit specific contexts. By working together, we can ensure that no one is left behind in the digital era.

Session transcript

Rose Payne:
Great. I think it’s time to kick off. So first of all, thank you, everyone, for joining us. I’m Rose Payne. I’m the Digital Policy Manager at the International Chamber of Commerce. We represent 45 million businesses worldwide. So that’s everyone from your huge multinational corporations down to MSMEs, from pretty much every single sector. So just first of all, some housekeeping. As you can see, we have two mics. In traditional IGF fashion, I’ll ask you to queue up behind them when we have the Q&A session, which will be towards the end of this panel. Great. So this panel, which is entitled All Hands on Deck to Connect the Next Billions, will take a deep dive into connectivity and digital divides. Today, almost a third of the world, some 2.6 billion people, remain offline. We’ve made huge strides in expanding connectivity, but clearly, we still have a long way to go. So this issue has deep-rooted causes. And it’s important to understand that it’s not just a matter of people being able to connect. So earlier in the conference, Doreen Bogdan-Martin, the Secretary General of ITU, mentioned that the proportion of women relative to the proportion of men who are offline is actually increasing. The persistent gender digital divide shows that the reasons for digital exclusion are complicated. This is as much a social problem as a technological challenge. Do people have the skills to use digital technologies? Are there relevant services and content that they want to use that motivate them to be online? What are their experiences like when they use technologies? Do they feel safe and secure? That’s why we often refer to meaningful connectivity. Infrastructure and devices are really one part of the puzzle. So we’re going to start with an exploration of where we are today. Where does the connectivity gap actually stand, and why? Why is there this persistent digital divide? This session isn’t just about discussing problems. It’s also about finding actionable solutions.
So that’s what we’ll discuss next. We’ll identify the right policy environment that encourages investment and how to create cross-sector partnerships. This workshop brings together experts in policy and technology who are dedicated to delivering universal connectivity using various technologies, economic and business models, and policy and regulatory approaches. Our goal is to learn from their experiences and discuss concrete solutions that can be applied or scaled up to ensure meaningful connectivity for everyone, everywhere. We’re very lucky to be joined by these experts who work with the technology actually delivering connectivity, people who are carrying out essential research to help us understand barriers, and people delivering programs that help to overcome them. So, without further ado, I’m going to introduce you to our panel today in the order that they will speak. So, joining us online, we have Atsuko Okuda, who works for the International Telecommunications Union, Regional Office for Asia and the Pacific, where she is the Regional Director. We have Takashi Motohisa, Manager of International Regulatory Affairs for Project Kuiper at Amazon. We have Pablo Barrionuevo, Public and Corporate Affairs Manager at Telefonica. We have Joe Welch, Vice President of Global Public Policy for the Walt Disney Company, who focuses on the Asia-Pacific. We have Michuki Mwangi, Distinguished Technologist for Internet Growth at the Internet Society. We have Anika Makwakwa, Co-Executive Director at the Global Digital Inclusion Partnership. And finally, we have Giacomo Persi-Paoli, Head of the Security and Technology Program at the United Nations Institute for Disarmament Research. Great. So, to begin, Atsuko Okuda, who’s joining us online, will begin with a presentation on the state of digital inclusion today. Atsuko, I hope you can kick us off now.

Atsuko Okuda:
Thank you very much. First of all, I would like to express our appreciation for the invitation for ITU to join this very important session, and I hope that we can support and contribute with the statistics and analysis that we have done for this session. Now, next slide, please. I would like to start with a very short presentation on what ITU is. ITU, as many of you may know, is the oldest UN agency, specialized in information and communication technologies, and we have three distinctive areas of specialty. One is radiocommunication, dealing with spectrum allocation as well as satellite orbits. Second, standardization, supporting SMEs and industry to develop technologies, including emerging technologies. And third, assisting member countries to connect the unconnected and achieve meaningful digital transformation. Next slide, please. So I would like to start with the bigger picture on the SDGs, and I’m not sure if participants have seen this slide in previous sessions, but this is the highlight of where we are in terms of achieving the SDGs by 2030. In Asia and the Pacific, we have already passed 2022, but we are nowhere near, and it’s unlikely that we can achieve all the SDG goals by that time. For one SDG goal, we are even seeing regression. So there is a high expectation that digital technology and connectivity will really accelerate not only the connectivity part but, through connectivity, the achievement of socioeconomic development. Next slide, please. So to answer your question in terms of where we are, according to the latest statistics, it is estimated that there will be 2.6 billion people still offline as of 2023. Now, for some of us who have been following the numbers, there has been steady improvement. The previous number last year was 2.7 billion, so you can see significant progress. However, in the two years before that, during COVID, we saw much faster progress.
Almost 800 million people came online during the short period of two years under COVID. So we can see that the pace of connecting the unconnected is slowing down, and I believe that this is one of the concerns that we share across the globe. Next slide, please. The next slide goes a little bit deeper into the digital divide and what it looks like. As you know, ITU collects data on various aspects of ICT development globally and in time series, and this is one presentation of the data analysis that we have done. We can clearly see the regional variation globally, the gender gap between men and women in the number of internet users in each group, as well as the affordability gap. These are some of the prominent features of the digital divide that we have. Next slide, please. We also have a very clear urban-rural divide as well as generational divides. This is the granular view of the percentage of internet users in each region, as well as per income group. You can clearly see that low-income countries have far fewer internet users, and the same applies to LDCs and LLDCs, the least developed countries and landlocked developing countries. Next slide, please. I would also like to go a little bit deeper into affordability. As you may know, broadband service affordability is measured against a benchmark of 2% of GNI per capita. Above that, the service is considered not affordable; below that, the population can access and enjoy the service. This is a snapshot of which countries have affordable and unaffordable mobile broadband baskets as of 2022, and we can see that countries with unaffordable broadband services have far fewer internet users. Next slide, please. So we believe that the challenges of connecting the unconnected, as you have seen in the previous graphs and charts, can be summarized in this problem and solution tree.
Of course, this is a high-level summary, and you may have different perspectives on the elements. But from where we stand, to connect the unconnected in remote and rural areas, the challenges are investment and physical connectivity, affordability on the part of consumers, as well as digital skills and the lack of services and applications that can bring concrete digital benefits to the communities and population. Next slide, please. In order to turn those challenges into opportunities, and in the context of the slowdown of our progress in connecting the unconnected, we believe that perhaps we need a qualitatively different approach to narrowing the digital divide. One of the approaches that we have been advocating is a whole-of-government and whole-of-society approach, because what is plaguing the smaller economies in particular is a siloed approach: to connect unconnected schools or hospitals or farmers, we have different initiatives, but we believe that by connecting these different groups and sectors, perhaps we can create efficiencies and economies of scale. There is a lot we can gain by collaborating and through partnership. So this slide shows what that means concretely as an ICT initiative. We start with the SDGs on the left-hand side. In the center, in the government, we believe that we can create whole-of-government digital services built to support education, health, and agriculture by breaking the silos and without creating separate systems and infrastructure. We hope that this will be delivered to communities and people through smart cities, smart villages and smart islands, and that there will be a smooth transition from the SDGs to actual benefits to the communities. Next slide, please. So what could the smart village or smart island look like?
This is a very high-level overview and summary. As I said, we need a whole-of-government approach in the center that can provide education, health, commerce, agriculture, and so forth based on common building blocks, as you see on the left-hand side: it could be a national ID, a single payment mechanism, or a messaging service. That will then be translated into the village in the middle of the slide, a low-cost, scalable, multi-sector collaboration platform within the remote villages and islands that will provide e-health, e-agriculture and those services to the people. Next slide, please. In ITU, as an example, we have about 15 countries where we are rolling out this initiative, and I’ll be happy to provide more details later on. Next slide, please. So this is my last slide. I believe that materializing the whole-of-government and whole-of-society approach also requires a whole-of-support approach, a partnership among all of us, so that when we see an opportunity for synergies and partnership, we can join hands to make sure that there will be perhaps one solution to address the issue instead of two or three different solutions. We believe that through this partnership and joining hands, we can provide qualitatively different support to the member countries and target communities. Next slide, please. So thank you very much for giving me this opportunity. I have a QR code in case you are interested in knowing more about these initiatives, and I’ll be happy to answer any questions you may have. Back to you. Thank you.

Rose Payne:
Thank you so much. I think that was a really fantastic framing of, first of all, why this is so important. Connectivity doesn’t just lead to economic growth; it can also help to achieve global goals and potentially help us get back on track with the SDGs. We also heard about the complexity of this issue and the need for really innovative solutions that take an ecosystem approach. So I’m going to ask the rest of the panelists for their reactions, starting with Takashi. You work for Amazon’s Project Kuiper, which is addressing how to connect some of those really hard-to-reach areas using low-Earth-orbit satellites. What can you share about how the private sector is innovating technologically to close the gap? Takashi.

Takashi Motohisa:
Thank you. Yeah, first of all, I would like to say that I’m very excited to be part of this session, and thank you for giving me this opportunity to discuss how to bridge the digital divide globally with experts from a wide variety of fields. That’s a fantastic chance for me, because bridging the digital divide is exactly the mission of the Project Kuiper that Amazon is working on. And this is also my personal motivation for working on this project. So let me talk about Project Kuiper as an example of how the private sector is working to bridge the digital divide. Please go back to the second slide. Thank you. This is Project Kuiper’s mission. Project Kuiper is an initiative to increase global broadband access through a constellation of more than 3,200 satellites in low-Earth orbit. Its mission is to deliver fast, affordable connectivity to unserved and underserved communities around the world. The Kuiper system will deliver speeds and latency on par with existing terrestrial networks. And like many other Amazon products and services, Project Kuiper is designed to be affordable, because we want it to be accessible to as many customers as possible. Please go to the next slide. Project Kuiper will serve individual households as well as schools, businesses, hospitals and other organisations operating in locations without reliable broadband services. Project Kuiper will also provide backhaul solutions for wireless carriers to extend their LTE or 5G networks to new regions. We will deploy more than 3,200 satellites in low-Earth orbit at three altitudes: 590km, 610km and 630km. Coverage of Project Kuiper will span 56°N to 50°S of the equator, which allows us to reach about 95% of the world’s population.
And then the satellites relay the customer data traffic to our ground infrastructure on Earth, connecting to the internet, public cloud and private networks. This is how the Kuiper network works. In March, we revealed our three customer terminal models, which are groundbreaking in terms of performance and affordability. These state-of-the-art antennas designed by Amazon engineers include the smallest, ultra-compact model, which has an antenna of only 80cm². I believe it’s incredible engineering. It can deliver up to 100Mbps, and the largest model will deliver speeds of up to 1Gbps. Please go to the next slide. Last Friday, our launch partner, United Launch Alliance, successfully launched Kuiper’s very first two prototypes on the Atlas V rocket. This was one of our key milestones. We are ramping up our satellite manufacturing facility and will begin launching production satellites next year, so we can start to deliver service to the earliest customers by late 2024. That’s an overview of our project.

Rose Payne:
Thank you so much. So now that we’ve heard a little bit about the technological kind of innovation that can help close some of those gaps, I would like to move to usage and how we can address the usage gap. So Pablo, I think that you’re online. I hope that you have the ability to unmute yourself, if not, send me a message.

Pablo Barrionuevo:
Yes. I think, can you hear me?

Rose Payne:
We can hear you now. Fantastic. Great. So could you share a little bit about what Telefonica is doing to address usage?

Pablo Barrionuevo:
Yes, thank you. Thank you for the opportunity to participate in the panel. I’m very pleased to be at the IGF again. Well, I think that the first idea I would like to transmit is that the first impression when we talk about digital inclusion is that it is a problem of access. And it is true that telecommunication networks are the backbone of our societies and economies, and without access we don’t have anything. So the first step is the infrastructure, the telecommunication networks. But the truth is that I believe that in the last years we’ve been observing an evolution in this conversation about digital inclusion. And I think that what we are observing is a switch from the connectivity gap to the usage gap. Following the numbers presented by Atsuko at the beginning, the truth is that now we have 3.2 billion people who are under the footprint of mobile broadband connectivity and do not connect to the internet. And so I think that the question we have to answer is why we still have people who are under the footprint of connectivity but do not connect. And this is the usage gap. And so it is important to understand the reasons for these people not to connect. We’ve talked about affordability; that’s one, of course. Maybe they don’t connect because they don’t know how to connect, and so we have to improve skills. Maybe they don’t connect because they don’t trust, and so we have to work on confidence and build digital trust. There are many reasons. Also, the gender gap has been mentioned. But I would like to underline this idea of a shift in the conversation on digital inclusion from the connectivity gap to the usage gap. The second idea I would like to transmit is that I think one of the learnings of the last years in connecting the unconnected is that none of us can do it by ourselves. This is an idea transmitted by Atsuko also. And this is the idea of partnerships.
We need all hands to connect the unconnected. Of course, the telecom operators, but also the governments and other stakeholders. I would like to simply mention an example, which is a use case we’ve put in place in Peru, called Internet para todos, IPT, which is a collaboration, a partnership between ourselves, Meta and the Inter-American Development Bank. This is just an example of this kind of partnership. So, as a conclusion: first idea, the conversation on digital inclusion has evolved from the connectivity gap to the usage gap. This is where the problem is now, in my opinion. And second, the idea of partnerships. We need everybody to work in the same direction to connect the unconnected. Thank you. Back to you.

Rose Payne:
Great, thank you so much for that. Joe, we’ve covered access, we’ve covered usage, but we’ve also heard that that’s just one part of the puzzle. So could you talk to us a little bit about how partnerships can also address that question of skills?

Joe Welch:
Yes, I can. Is the microphone on? Yeah. First, thank you, Japan, for having us here. It’s just an amazing event. It’s my first IGF, go figure. And the weather is fantastic, makes it even nicer. And thank you, ICC. And thank you, fellow panelists. As Disney, I’m a little bit humbled to be on with Kuiper. Oh my god, the LEO thing is finally happening. There’s competition for Elon Musk. It’s just amazing. And of course, the ITU presenter framed the problem for us, which was wonderful. Telefonica goes out and actually builds connections, respect. So I’ll just try and do the humble Disney contribution to the discussion. Before I get to partnerships, I’m going to back up and ask the question: what do we do? We don’t build connections. We’re not a satellite company or a telco. So what do we bring to the table? Well, we try to bring the demand side to the table, right? Backing up, that means we try to make amazing content and then put it on the internet and drive demand. That’s our thing; I think that’s why we’re on this panel. So enter Disney Plus, which launched in 2019, 2020, 2021 around the world, Middle East, Africa, Asia, Europe, etc., and so that’s where we can put our product online and help drive demand. 60-plus countries, the global content is there, the Lucas, the Marvel, the Pixar, the Disney, the Nat Geo as well. And then, even better, is when you can add in-country content, right? Disney Plus allows us to make local content. Kinda hate that word, right? It sounds almost pejorative, so I’ll call it in-country content. Content in the language of that country that we create, we can now do that. We’re playing in that space; we have a Korean show we just made called Moving. It’s a Korean superhero espionage series. It’s doing really well all across Asia, and it’s doing well on Hulu in the US.
So it’s not only Korean content in Korea, it’s exporting Korean content, which is even better, right? Driving demand around the world. India, we do great in India. We had the Star brand in India for a couple of decades. I’m sure anyone from India knows it’s really vibrant Indian content, Indian people making Indian content for Indian people. We’ve been doing it for a long time. Now we can do it through Disney Plus. We have scale in that market. It’s to India’s credit that they’ve let us in, right? There’s another large country that doesn’t let companies like mine in. And there’s a Gandhi quote, which I’ll paraphrase; it goes something like this: you should have a house that is built strongly enough that you can throw open the windows and the door and let the breezes come through. Applied to our industry, it means we’re allowed to come in and make content in the language. It’s fantastic. All right, I’m now gonna segue into three questions for the room. Anyone can answer, panelists included, Rose, you too. If you answer the three questions collectively, I will give everyone a Disney token or a tchotchke of some sort. So I’m gonna read a quote, and this picks up on the in-country language. If you can identify who said the quote, that’s the first question, and then there’ll be two following questions. So the quote is: if you speak to a person in a language they understand, you speak to their head. If you speak to them in their language, you speak to their heart. You don’t have to raise your hand, just shout it out. Anyone? Very famous person. I’ll give you the continent. Africa. Still no one. Mandela. Yes, that’s one. Now that one was easier; the next one’s a little harder, but I think doable. What language was he talking about? For he himself, what was his first language? Who said that? There you go, Bertram. That’s two. That was probably the harder one.
This last one’s easy, and it comes back to the Walt Disney Company. In which Disney movie was Xhosa a significant part? Thank you, Helen. Yes, Black Panther, of course. So that’s our piece of this: to drive the demand, and if you can do it in-country, creating in-country content, then, you know, you’re even more home and dry in motivating that demand. I’ll end this part of the panel by plugging another panel. It’s at 5.30, it’s on the main stage. Bertsen, what’s it called again? It’s called the Policy Network on Meaningful Access. Sorry, my voice is gone. And it will feature some producers from Kampala, Uganda.

Audience:
Yes, two female, young female producers, two sisters have been producing local content in the region for the region in local languages, including Luganda, and of course, Swahili.

Joe Welch:
All right, so that’s a way better story than I could think of, and it’s just a nice happenstance. It’s coming up at 5.30 on the main stage. Back to you, Rose.

Rose Payne:
There we go. Oh, yeah. Great. You may find that a lot of people suddenly turn up in the room at the end. Great, so you actually just lined up really perfectly the next person who can join us. So Michuki, I hope that you are able to unmute yourself. You just touched on the importance of adapting content to the local context. And Atsuko began by breaking down the state of connectivity across countries. That makes it really clear that connecting the unconnected means very different things in different places. I was hoping that you could speak to us a little bit about how we ensure that efforts to expand connectivity are responsive to local specificities and needs. Thank you, and I hope you can hear me. We’re all good, we can hear you.

Michuki Mwangi:
Great, all right. First, I’d like to start by thanking the ICC for inviting us and the Internet Society to this session because it’s a key area of interest for us and a focus area for our work because it’s very much aligned to our vision that the internet is for everyone. Now, to the question on expanding connectivity and the need to make it more aligned to the needs of people in rural and underserved areas. The one thing that we’ve come to realize is that there is a need for innovative approaches that can complement the traditional models for providing access. And because it’s evident that connecting people in rural, remote and underserved areas, and even in some cases, low income areas, presents a challenge to the traditional models that provide connectivity, especially from a business operations perspective and more specifically on the return on investments. So it makes it much harder for the traditional models to extend connectivity to these areas. Now, I’d like to set a perspective or sort of reset our thinking to understand what we mean when we talk about connectivity and what the internet is. Most of us understand that the internet is a network of networks. Essentially, what this means is that we have networks or individual networks that are using different technologies that come and interconnect together to make what we know as the global internet. Now, if we look at this from the context of those people who are living in remote and underserved areas, the way to do this in a sustainable way is to anchor the network or to build a network from those communities and then interconnect with the existing internet. And by doing so, it means that we are having to establish or anchor that connectivity based on the realities of people living in those areas. And that means that we have to consider the various social, economic, and other factors that exist or prevail in the areas that they are in. There was a study that was done or paper produced going back to 1998. 
The headline there was to look at those underserved areas as the first mile of connectivity. Over the years, those areas have been looked at as the last mile, but essentially, looking at them as the first mile means we’re building from the community outward. Now, a lot of work has gone into developing and piloting this kind of approach, where you’re looking from the community outbound. These approaches have been deployed and tested in different environments, topologies and settings, both rural and urban, and as high as 3,800 meters above sea level in the foothills of Mount Everest, where there are some villages that need connectivity. There’s some work we’ve done there, and we are trying to learn from that experience. The objective is to understand the challenges and opportunities and how we can refine these models to bring connectivity to those who need it the most and make sure it’s sustainable and scalable. There are a few things we’ve learned in helping to design this model to make sure that it adapts to the areas and communities being connected, and I’d just like to touch on a few of them. First, we make sure that the model is technology agnostic, meaning that it can use and adapt any technology that best serves the needs. It also adopts a non-profit approach in order to address some of the challenges that hinder adoption. The issue of usage has been mentioned by a previous speaker, and there are issues like digital literacy and the need for digital tools, training, technical support, and so on. The model that we use when we talk about community networks really helps to address some of these local issues. Affordability is another challenge.
And so the model they use for charging fees is designed to be beneficial to the community. An important aspect of this is that these networks are less extractive in connecting people. Because the network is owned by the community, only a portion of the fees paid for connectivity is actually used to pay for backhaul capacity. The rest of the funds are kept within the community and used for other elements needed to develop the community so it can take advantage of that connectivity. Most importantly, it is anchored within the socioeconomic pillars of that community: schools, health facilities, the local government, local and cultural institutions, and also the arts and entertainment that are key to that community. Now, there are some opportunities we need to look at going forward. Key to this is policy. Many countries and places around the world need to develop policy and regulation that recognizes these new models, or complementary access solutions, like community networks, because it empowers them to engage with licensed operators and connect to the rest of the Internet. They need to be able to build infrastructure, so they need rights of way; this makes it much easier. And in some situations, access to financial services: once they are recognized, it’s much easier for banks and other financial institutions to pay attention to these solutions and give them funding. Of course, access to funding is a critical issue, and right now there are emerging conversations around how the Universal Service Fund can help support the initial deployment and extension of complementary access solutions. Backhaul still remains a challenge.
We are very excited about the deployment of new technologies like LEOs (low Earth orbit satellites), because the cost and availability of backhaul has been one of the big challenges that make community and complementary access solutions less sustainable. So we are hoping that is going to change. Before I conclude, the content element is key. Most important is making the content relevant to those people who are actually getting connected for the very first time. Why should they spend money to get connected when they have low incomes? They have to strike a balance between the money they have to spend on other things and the money they use for access. So as I conclude: at the Internet Society, we are very keen to see information about developing and deploying community networks and other complementary access solutions become common knowledge, in the very same way that it's common knowledge for many communities around the world who don't have access to potable water that they can collaborate and find partners to support them in digging a borehole to get access to water. Working collaboratively with other partners, we are planning to produce a do-it-yourself toolkit in 2024, based on all the experience we've gained from our deployments across the world. We are hoping that it will help anyone who wants to support, or is living in, a community to know the path to take to build, own, and operate Internet infrastructure and connect it with other networks, for the benefit of those communities that are yet to be connected. Thank you very much.

Rose Payne:
Thank you so much. So, Onica, I'm going to turn to you next. Michuki just spoke about the challenge that connecting people, particularly in rural areas, creates for traditional business models. The Global Digital Inclusion Partnership leverages multi-stakeholder partnerships as a way to deal with this. Can you talk to us a little bit about the role those partnerships play?

Onica Makwakwa:
Certainly. Good afternoon, and thank you so much for hosting this important conversation. At the Global Digital Inclusion Partnership, we work on policy and regulatory frameworks to advance meaningful connectivity for the global majority specifically. We do this by bringing together different stakeholders at the national level, as well as at regional and global levels, bringing government, the private sector, and civil society together to look at opportunities for building this. As an example, we use research to inform the policy frameworks we want to gain support for. I'll give you an example with the 2% affordability target that the first speaker spoke about, in terms of how it was built. It is a "one for two" target, meaning that one gig of data should not cost more than 2% of average monthly income. That was informed by research done in 2012 and 2013. In 2014, we were able to get multiple stakeholders in Ghana and Nigeria to have their governments endorse this as a minimum standard, even before the ITU adopted it as one. So that has been our model. But apart from that, I want to divert a little from how we get there to talk about why it's important for us to be inclusive even in our solutions, because multi-stakeholderism is about making sure that everyone has an opportunity to contribute and have a voice. One of the missing conversations in the digital divide discussions we've been having has been the economic impact of leaving certain stakeholders behind in our digital development, and I'm going to highlight women in particular. What is it costing governments and economies to leave women behind? A simplistic view, for those of you who love soccer, or let's say rugby because we're in the World Cup, and I was very disappointed a few days ago that I couldn't find anyone willing to watch the game with me.
But the Rugby World Cup: who's your favorite team? Say South Africa, please. We're the best. Okay. So, your favorite soccer or rugby team: if we were to take half of the team and bench it, could you still win? It doesn't matter whether it's your best players or your worst players; just bench half the team, and your chances of winning are nonexistent. That's actually what is happening with us not paying attention to the digital gender gap, which every conversation we've had here at IGF has told us is actually growing. So there is an economic impact. We've actually done a study, working with different communities, to look at the economic impact of leaving women behind. The digital gender gap is estimated to be costing a trillion dollars in GDP over 10 years across the roughly 32 low- and middle-income countries that we studied. In 2020, this represented $126 billion lost, and about $24 billion in lost tax revenue. You know, when a woman is unable to use the web to get an education, to access healthcare, to build networks, she has fewer opportunities, and everyone pays a price for that. So I think it's really critical for us, as we continue to talk about closing the gaps, to also think about how we are being inclusive in that process. One of the common sayings I love from civil society in South Africa is "nothing for us without us": the importance of engaging those stakeholders when we are building for them. And we've had examples and good hard-knock lessons: the digital centers that were built because we thought people in rural areas needed digital centers. We applied a build-it-and-they-will-come approach, we built them, and the digital centers are now just sitting there, not being heavily utilized.
And I think now we are slowly starting to talk about this usage gap, because while infrastructure is important, and we still have a long way to go, addressing the demand-side issues is equally important. We want people to connect to the internet to do what? It's not for the vibes. We want them to connect to the internet so that they're able to use it in a way that can help transform their lives. One of my favorite stories was of a young boy who was a beneficiary of a public Wi-Fi program in the City of Tshwane in South Africa. When he was interviewed about why it was important for him to walk so far to the nearest hotspot to connect, his response was that he lives in a shack, and when he's online, he no longer lives in a shack. Let that sink in. We have an opportunity and a tool to really empower people in a way that's transformative, and the people who need this kind of transformation in their lives the most are the 2.6 billion people who are actually offline at the moment. I don't want to be too preachy about this, but where do we go from here, and how do we build forward? The biggest thing we have to come together and agree on, private sector, government, and civil society alike, is that we need to raise the bar on affordability. We've been working on affordability since 2014, some of us, and others much longer. You saw the graph showing how many countries have actually achieved affordability. And we are talking about 2%. But let me remind you what that 2% actually means: 2% for one gig per month. What meaningful activity can any of us do if we can only afford one gig per month of connectivity? Certainly not the things we were required to do during the COVID lockdowns. You cannot take a course and actually finish it with one gig per month. You cannot attend meetings. There's just so much that cannot be done. So we need to raise the bar, from a policy point of view, on this standard for affordability.
And look at the meaningful connectivity standard, which tells us we need to aim for people being able to use the internet daily if they want to. At the moment, we still define a connected person as someone who accesses the internet once every three months. So we are doing all of this, but our standards are so, so low, and we are calculating the gaps based on very low standards, which means the real picture is even worse than it appears; it would show up if we used a meaningful connectivity standard. So daily use and unlimited access to data is the standard we should be pushing for. Adequate skills for connectivity, investing in skills. A minimum standard on a device suitable for meaningful connectivity: a smart device, let's just say that, we won't say phone, but a smart device that is affordable. One of the studies we also did was on device affordability. On the continent of Africa, we are still spending 40 to 60% of average household income on one device, so device affordability is a big issue. A lot of the solutions that have been introduced are financing of devices. That is not affordability; it's "you can't afford it now, but over six months you will afford it." We need people to be able to afford devices, so we need to work on innovative solutions, whether it's local assembly or reducing digital taxation on the devices themselves, as a way to really spur the uptake of digital technologies. Skills I've already mentioned, and an adequate speed, at least 4G level. If we are truly talking about people being able to do the things that Joe was enticing us with, the content that is dynamic and vibrant, we need to admit that we have to raise the bar on the standards, and this requires a policy approach. And lastly, we have to mainstream gender in ICT policies. It is not acceptable that we continue to have a gap that is growing. And how do we do that?
We do it through a framework that was actually developed by women, called the REACT framework. We need a rights-based approach to ICT development: rights-based because women experience violence online. There are safety issues, there are privacy issues, and some of the gap is caused, yes, by lack of affordability, but also by women censoring themselves out of participating because of the experiences they have when they connect online. We need to invest in education for digital skills so that everyone is able to truly participate at a minimum level, and maybe define what a digitally literate person is, the same way we did with the ABCs of education: what is a digitally literate person, how do we define that, and how do we work towards achieving it? We need to really double down on access. I joked the other day that the running theme here at IGF is "we are running out of time," but usually that's because the sessions are always running late. We truly are running out of time, though. You saw the SDGs and the schedule. We are running out of time to connect everyone, to make sure we reach universal access. Over $400 billion in infrastructure investment is estimated to still be needed to connect everyone. Even if the private sector could put up half of that, we still need governments to prioritize investment in digital development, and in infrastructure in particular. So it's going to take public-private partnerships; it's going to take everyone actually contributing to this. So, access and affordability. And on affordability, we have to admit that there may be communities that might never be able to afford connectivity, even at the 2% level, so initiatives like the community networks the previous speaker was talking about are really important. We have to be open to different digital models, as well as different financial models, for connecting the unconnected.
We have to focus on content, so that's the C in REACT: content, but content in languages that people can understand and operate in. Seriously, we don't want to connect everyone just for them to read English online. Most of the content right now is in English, and I come from a continent where the majority of the population does not speak English. So content is really important, and it carries an economic opportunity as well, because the content economy is quite a thing, right? We don't want to only be consumers online; we also want to be innovators and produce and develop content as well. And last, the T in the REACT framework is about setting targets. We can't measure something we did not set a target for, did not benchmark. We need to set targets and be intentional about closing the digital gender gap especially. It exists because of the inequalities that already exist in our society; it's not just an online or ICT divide. So we have to be intentional about fixing those. A good starting point is looking at national broadband plans and seeing if they specifically address any gender issues. Quite a number of broadband plans are still silent on gender or women, or don't go the extra mile to ensure that women are included in the digital economy we are all talking about now. So I will just pause there to say: REACT, a rights-based approach, education, access and affordability, content, and setting realistic policy targets to actually connect those who are not connected, is where we need to be.

Rose Payne:
Thank you so much. So next we have Giacomo, who's joining us online. I hope that he can unmute himself. Onica, we've just heard quite a lot about the gender digital divide and the fact that when people go online they may not feel safe, and that may have a chilling effect. The other aspect of this is security online: if people don't feel the online space is secure, they may just not engage with it. Giacomo, I would really love for you to speak a little bit about how we make sure that a lack of security doesn't have that kind of chilling effect.

Giacomo Persi Paoli:
Thank you. Thank you very much. I hope you can hear me; I'll assume you can unless you tell me otherwise. Thank you so much to the ICC for inviting me. It's really an honor and a pleasure to be here today and speak on this great panel. It's always challenging to be the last speaker, because I had prepared a bunch of notes and, as previous speakers were presenting, I had to delete points to avoid repetition. But I still hope I can add some value here. I actually want to go back to where we started and tackle your question about the links between connectivity, digital technology, and the SDGs. I will start with a quote from the UN Deputy Secretary-General, who, at the opening of the Sustainable Development Goals Digital Day in New York about a month ago, said that digital technologies, when used safely and responsibly, can be catalysts for economic, social, and societal transformation by creating efficiencies at scale and expanding the reach of existing solutions to support more people. Now, I find this a very interesting quote, because we've heard so many times how digital technologies can have this catalyst effect, how they can be accelerators for the SDGs, but this was the first time I saw so clearly and so explicitly a reference to responsible and safe use. And what do "responsible" and "safe" actually mean in this context? First and foremost, we should take this as applicable to all stakeholders that are part of the connectivity ecosystem, from users to companies to governments. This is really a shared responsibility, and everyone should hold up their own end of the bargain and make sure they deliver. But what does being responsible and safe actually mean? Going back to something I believe Pablo mentioned earlier, it's about earning and building digital trust. People have to trust in order to connect.
They have to trust the technology itself, they have to trust the companies behind the technology, and they also have to trust the governments that, in their roles as both regulators and service providers, have to create a safety net, a protection net, around users. Without this combined effect, it's going to be very difficult to build the trust that is needed and mitigate the chilling effect. Because the chilling effect can occur in two ways. It can occur if users feel unsafe and unprotected on the internet, but it can also be the other side of the coin: they might not be willing to connect if they feel there is an abuse or misuse of the powers of companies or governments. In this asymmetric distribution of power, users may perceive threats to their own rights; they're uncertain about what's going to happen to their personal information and data. So security is a very important point, but it's also a very delicate one that needs to be taken into consideration from the beginning. We've heard about the importance of affordability, of access, of inclusivity, and of reducing the gap; I want to add another important parameter here, which is preparedness. I believe it is very important, if we focus on the role of governments, that governments not only invest in ensuring connectivity, the ability of people to connect, but do so without overlooking, or taking shortcuts on, developing the preparedness of the whole system to absorb this innovation and increased connectivity. I just want to reference a recent report by the Economic Commission for Africa, which highlighted how, relatively speaking, for the African continent, the low level of preparedness in cybersecurity cost about 10% of national GDP.
So, that is a significant number, if you think that, on one hand, connectivity can boost social and economic development, but, on the other hand, the lack of preparedness, the lack of an appropriate cybersecurity ecosystem at the national level, can actually slow down, or potentially even reverse, the positive effect that connectivity and digital technologies can bring. So, what does preparedness actually mean in this context? I think we have to look holistically at the whole of government, as was mentioned at the very beginning, and that's why I said I would like to finish where we started: a whole-of-government approach is important. It means, fundamentally, intervening at five different levels. The policy and regulatory part is key. Governments need to have policies and regulations that enable and support innovation and at the same time create the safety and security regulatory ecosystem that allows the responsible and safe use of these technologies. The layer that follows policy and regulation is a layer of processes, of operations. We've heard how public-private partnerships are incredibly powerful tools that can be leveraged to boost connectivity, development, and the absorption of these digital technologies. These public-private partnerships have to be structured; they have to be put within a framework that allows them to work. You cannot wait until you need a public-private partnership to work before worrying about establishing the frameworks that allow these partnerships to exist in the first place. Structures are also an important layer, and here we go back to safety and security. You cannot really think, in 2023, of investing heavily in boosting your connectivity if you're not equally ready to invest in building your own capacity to deal with incidents and emergencies that happen in the digital domain.
So being able to build computer emergency response teams, or computer incident response teams, that can work at the public level and in cooperation with the private sector is key; you really need to invest in building the structures that ultimately deliver that feeling of safety and security to individuals. If you're asking me to trust e-banking because it's safer than traditional banking, and you're asking me to put all of my savings online, I want to be sure that those savings are protected from criminals and malicious actors who may want to take advantage of me, of my perhaps lack of skills or knowledge. But I also want to know there is someone I can rely on to protect that critical infrastructure and those critical services. And then, of course, skills were mentioned. Skills are important. Globally, there is a shortage of cyber skills and cybersecurity skills, so it's going to be hard. Nevertheless, it is important that significant investment is made in skills, not only to enable connectivity and develop the skills that allow people to meaningfully engage with services and content online, but also to develop basic digital security, or cyber hygiene, skills: through public campaigns, and through education already in schools, making sure we invest in building the foundational knowledge that would enable people to safely and securely engage online. And last but not least, there's technology. Now, technology has always been considered a potentially high barrier for some developing countries, with the idea that it would put a lot of pressure on them to equip themselves with the technological solutions needed to protect, monitor, and keep the digital environment safe.
However, it is important that we do not let the challenges prevent us from even engaging in the discussion of how important it is that, potentially by leveraging public-private partnerships, governments equip themselves with the technological capabilities to monitor and protect their own networks. Because digital connectivity is fundamental, is key, and can have so many positive effects, but it also significantly expands the attack surface, the entry points for malicious actors to target individuals or societies at large. So it is really important that we do not consider security as a cost line to be minimized; it is actually an asset. It's an investment that we need to take seriously in order for it to pay dividends and make sure that people, institutions, and companies can enjoy the benefits of an open, safe, and secure online environment. Thank you.

Rose Payne:
Thank you so much. I'm just going to let the panellists know that I think we're a little behind schedule, so I'm going to deviate from the plan and begin straight away with the Q&A. I'm going to cheat a little and take advantage of my role as moderator to ask the first question. I'd like to go back to Giacomo's point about structuring partnerships. Every speaker today has agreed that, one way or another, we can't work in silos; we need to work together. Every stakeholder, be they government or private sector, has a role to play: the whole-of-society approach, the ecosystem approach. Can you talk a little more about the way we need to structure those roles and those partnerships? I'll hand the mic to you first, Takashi.

Takashi Motohisa :
Thanks. Actually, I would like to talk about Kuiper. We are putting a lot of effort into a successful launch of the project and the start of service in the near future. Amazon is investing $10 billion. The number is a little smaller than $400 billion, but we are investing a lot. We believe we will be more successful if we can build more partnerships, with both the private sector and the public sector. We are fully committed to working with partners who share our common concept of bridging the digital divide. We think partnerships will take a variety of forms. For the private sector, for example, it can be a partnership with a wireless carrier to extend its LTE and 5G networks to new regions; that is one form of partnership. Actually, last month we announced a partnership with Vodafone and Vodacom. They are going to use Kuiper services to extend their networks in regions of Europe and Africa, and we are very excited to see how the partnership can improve the networks in those regions. We are looking forward, of course, to partnering with others. And for the public sector: our ability to connect customers requires access to cable and radio frequency, and regulations that enable modern satellite technology. We expect public sector support at both the country level and the international level, from spectrum access and the necessary licenses in each country to updates to the international radio regulations, which can enable modern satellite systems like Kuiper to fulfill their potential to bridge the digital divide. Thank you.

Rose Payne:
Thank you so much for that. I'm going to ask that anyone who has a question please come forward to the mic. We'll actually start with one online. I have someone in mind to answer this, but after I ask that person, any of the panelists should feel free to jump in. The question was: when we're talking about skills to engage with digital technologies, what skills do we mean? Joe, I think it would be great if you could give the opening answer to that.

Joe Welch:
Sure, I'll jump in on that, but I'll turn my mic on first. Yeah, forgive me for defaulting to a bit of a "what we do" answer. We'll enter any given market with the service I mentioned, Disney Plus, and we'll just try to be good partners as we come in with the service. We'll partner with the in-country telcos, be a good partner to the creative industry, to that part of the ecosystem, and to the policymakers, so that we have the reputation that these guys are part of the fabric of our community. That's good for our business, and it's good for the country in question. And then we try to do a little more on top of that; that's just the threshold. The "more" part is to actually do projects, so we'll fund an NGO or work with the government. We've got projects in 20 different countries, doing digital literacy projects or online safety projects with an NGO or with government. I'll give you two examples. One would be in Indonesia, where we partner with an NGO called Ganara. They go around to schools and work with the kids in the classroom on digital literacy, on bullying and issues like that, and they do it through art, pen on paper, old-school art, as a way to bring Indonesian children up the curve on digital literacy matters. A related one for us is in Latin America, where we have Chicos.net, which works across the region training the teachers to then do something similar to what I described for Indonesia. So those are two different approaches that we take. I hope that's responsive. Thank you.

Rose Payne:
That's fantastic. Would anyone else, either in the room or online, like to pick up on this question? Just as a reminder, that was: when we're talking about skills to engage with digital technologies, what skills do we mean?

Onica Makwakwa:
Let me just humanize that question a little and share what we have learned about what people do online when they have the right connectivity, the speed, and the know-how to utilize these tools. This is all part of our costs of exclusion study, where we took several countries and did qualitative research and ethnographic studies to really dig deep and humanize the economic impact of the exclusion. This is from the women who were online, who were included. What we learned in West Africa in particular was that for every three men using the Internet, only two women were. But those who were online, these are the things they were accomplishing: they reported that during COVID-19 they did not lose income, because they were able to convert their business of selling at the market by creating WhatsApp groups and selling their goods online as well. Those who were online reported seven times more often than those who were not having upgraded their skills by taking a course online, and having improved their job skills by being connected. A lot of the uses were really connected to their financial ability to take care of themselves and their families. So I'm going to answer from that point of view: yes, it's about digital skills, but maybe we need a broader conversation. Is it coding? Is it the skills to operate mobile money? Is it the skill to run your business online? With the right device, the right connectivity, and the skills, the theme we are seeing is that women in particular are able to preserve their economic standing during periods of crisis, such as the one we've recently had with the pandemic.

Rose Payne:
Great, thank you so much. I'm aware that we've got just under ten minutes left, so I want to give everyone the chance to make a final statement, because I cut some of you off from content you were hoping to share. I'm actually going to start with Atsuko, if that's okay, and if I can request that people keep their comments to about one minute, that would be fantastic. Thank you.

Atsuko Okuda:
Perhaps I can combine my contribution to answer the skills question with my closing remarks. I think this question of what skills is very important. I mentioned earlier connecting the unconnected, but in fact these are very dynamic and fast-moving days of technological development, with AI and data analytics, and it's going to affect connected and unconnected communities alike. So the knowledge and skills we need to address the reality in front of us may be different from what we anticipated, say, five years ago. Just to give you a quick example: there are already jobs that may become redundant, and jobs that may be created, and without the necessary new digital skills, people may have a tough time. So I believe we need to revisit what digital literacy and digital skills will be required in the new normal we are seeing in front of us. I hope that includes AI solutions, as well as the need for data and the data-intensive decision-making that surrounds us in e-commerce, traffic management, asset management, and mobile banking, as an earlier speaker said. So I hope this leaves you with a question, and with the cautious optimism that perhaps together, in partnership, we can address it and move forward. Thank you. Back to you.

Rose Payne:
Thank you so much. So next I’m going to go to you, Takashi.

Takashi Motohisa :
Yep, thank you. I can state that we are moving forward strongly to bridge the digital divide. We will just do that. Fantastic. Thank you.

Rose Payne:
Next on the list, I have Pablo.

Pablo Barrionuevo:
Yes. Thank you, Rose. Well, I'd like to congratulate all the panelists, and you, Rose, because we have identified everything. We have all the ingredients on the table. For example, the technologies: we now have the technologies to connect everybody; it is not a problem of technology. We've mentioned structure; I think that is maybe not the thing either. We have to understand that what works somewhere may not work somewhere else, and maybe that is the point: we have to find the flexibility to find the correct solution for the correct place. We have all the ingredients, and what is the common ground? The common thing is that we have to work all together to connect the unconnected. That is the solution. We have the ingredients, we have the recipe, and what we have to do is work all together to connect the unconnected. That's the thing, in my opinion. Thank you.

Rose Payne:
Thank you so much. So, Joe.

Joe Welch:
Yeah, thanks. I’ll end with an affirmation of the multilateral, multistakeholder approach. 9,000 people here this week, dealing with existing issues and new ones like AI. And it’s my first. And it’s been amazing. And it’s a real treat. And then this panel: academia, civil society, industry, different parts of industry, coming together on an important issue like this. And I learned a famous South African quote today that I’ll use in future presentations, which is, nothing about us without us.

Rose Payne:
Thank you. Michuki, can I ask you to come in next?

Michuki Mwangi:
Yes, thank you. In conclusion, as Pablo has rightfully noted, the solutions are there. Now we need to go to the next step, and that is to scale up these efforts so that they have the impact they need to have. That will be possible by increasing the funding available for deploying the solutions that have been identified. It’s not about the technology; it’s about getting more people connected, and that needs to start happening. We also need to increase partnerships, because they are an essential component, an ingredient for success. As almost every other panelist here has rightfully noted, we cannot do it individually; it needs a lot of collaboration and partnership to achieve that goal. And so if we’re able to scale, and make sure that we get the funding and the partnerships, I believe it’s possible to achieve the 2030 vision of having everyone connected. And we, as the Internet Society, are very much here to support and collaborate with everyone to help achieve that goal. Thank you very much.

Rose Payne:
Thank you so much. I’m now going to ask Onica to give her summary.

Onica Makwakwa:
So the gaps that exist today are actually a consequence of policy choices that we either make or don’t make. So I’m going to challenge us that we must work with policymakers to look at narrowing the gaps. And one that we didn’t talk much about is rural v. urban divide. So engaging rural communities in the broadband policy agenda to make sure that we are not leaving people in rural and remote areas behind. Embedding meaningful connectivity indicators with key ICT statistics so that we are going beyond basic access to actually make sure that we are measuring based on meaningful connectivity. And lastly, we need to leverage public access solutions in order to provide affordable and meaningful resources to rural and remote areas in particular. Thank you.

Rose Payne:
Great. Thank you. And then, Giacomo, can I ask for your summary now?

Giacomo Persi Paoli:
Absolutely. And I’ll be very brief. I think it is important that we consider connectivity not as the end of the journey or the point of arrival, but as a point of departure and a new beginning. And to do that, it is important that we invest in preparedness, again stressing the importance of being prepared for what connectivity is actually going to mean and bring to society. And skills, as was mentioned, are a big part of that. I don’t think there is a single answer to what the right skills are; depending on which community you’re talking about, those needs will be different, but basic cyber hygiene skills for all users are definitely going to be needed. We’re going to have potentially 2.4 to 2.6 billion new people connected who were not connected before, and these people have to be upskilled to make sure that they do so safely and responsibly. But at the same time, governments have to improve their own digital skills and knowledge so that they can engage with other governments on a par. Really, skills is a very complex effort that should be taken forward as a key pillar of preparedness. Thank you.

Rose Payne:
Great. Thank you so much. I think it would be impossible to summarize everything, but just to quickly run through the last takeaways. I think the message for all of us is that we need to go fast and be flexible. We need to defend and uphold the multi-stakeholder model. We need better indicators, and we need to focus on rural areas. And then there was also a bit of a challenge to governments as well, because obviously skills are something that can be delivered by many different actors, but it would be great to see a focus on cyber hygiene in particular, which was another thing that came up. And finally, it’s clear that what everyone is really asking for is a whole-ecosystem approach, for everyone to move together. So I guess that’s a bit of a call to action to all of you. Thank you so much to all of our panelists today. That was fantastic. And thank you to everyone in the audience for participating. Thank you.

Pablo Barrionuevo

Speech speed

126 words per minute

Speech length

710 words

Speech time

337 secs

Atsuko Okuda

Speech speed

145 words per minute

Speech length

1710 words

Speech time

707 secs

Audience

Speech speed

150 words per minute

Speech length

32 words

Speech time

13 secs

Giacomo Persi Paoli

Speech speed

143 words per minute

Speech length

1788 words

Speech time

750 secs

Joe Welch

Speech speed

152 words per minute

Speech length

1306 words

Speech time

517 secs

Michuki Mwangi

Speech speed

162 words per minute

Speech length

1667 words

Speech time

619 secs

Onica Makwakwa

Speech speed

160 words per minute

Speech length

2760 words

Speech time

1035 secs

Rose Payne

Speech speed

164 words per minute

Speech length

2044 words

Speech time

748 secs

Takashi Motohisa

Speech speed

101 words per minute

Speech length

922 words

Speech time

548 secs

AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131


Full session report

Audience

Countries around the world are facing significant challenges in implementing artificial intelligence (AI) due to variations in democratic processes and understanding of ethical practices. The differences in governance structures and ethical frameworks make it difficult for countries with non-democratic processes to effectively grasp and navigate the complexities of AI ethics. Even in relatively democratic countries like the Netherlands, issues arise due to these disparities.

Furthermore, many countries are hastily rushing to implement AI without giving due consideration to important factors such as data quality, data collection, and data protection and privacy laws. The focus seems to be on implementing AI algorithms without laying down the necessary core elements required for a successful transition to AI-driven systems. This is a cause for concern, particularly in most countries in the global south where data protection and privacy laws are often inadequate.

The lack of adequate data quality and collection mechanisms, coupled with inadequate data protection and privacy laws, raises serious concerns about the safety and integrity of AI systems. Without proper measures in place, there is a risk of bias, discrimination, and potential misuse of data, which can have far-reaching consequences for individuals and societies.

In order to address these challenges, governments must recognize the need to ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge. The rapid advances in AI capabilities require a proactive approach in developing the necessary infrastructure and upskilling the workforce to keep up with the evolving technology.

In conclusion, the implementation of AI is hindered by variations in democratic processes and understanding of ethical practices among countries. Rushing into AI implementation without addressing critical issues such as data quality and protection can lead to significant problems, particularly in countries with insufficient data protection and privacy laws. Governments play a crucial role in fostering appropriate technical infrastructure and developing the necessary skills to effectively navigate the challenges posed by AI technologies.

Jingbo Huang

Jingbo Huang places significant emphasis on the importance of collective intelligence in both human-to-human and human-to-machine interactions. He recognizes the potential for artificial intelligence (AI) and human intelligence to work in unison to tackle challenges, highlighting the positive aspects of this partnership rather than focusing solely on the negatives. Huang emphasizes the need for collaboration and preparation among human entities to ensure the integration of AI into society benefits all parties involved.

Huang further expresses curiosity about the collaboration between different AI assessment tools developed by various organizations. Specifically, he mentions the UNDP’s AI readiness assessment tool and raises questions about how it aligns or interacts with tools developed by the OECD, Singapore, Africa, and others. This indicates Huang’s interest in exploring potential synergies and knowledge-sharing among these assessment tools.

Additionally, Huang demonstrates an interest in understanding the challenges faced by panelists during AI conceptualization and implementation. Although specific supporting facts are not provided, this suggests Huang’s desire to explore the obstacles encountered in bringing AI projects to fruition. By examining these challenges, he aims to acquire knowledge that can help overcome barriers and facilitate the successful integration of AI into various industry sectors.

In summary, Jingbo Huang underscores the significance of collective intelligence, both within human-to-human interactions and between human and machine intelligence. Huang envisions a collaborative approach that leverages the strengths of both AI and human intelligence to address challenges. He also shows a keen interest in exploring how different AI assessment tools can work together, seeking to identify potential synergies and compatibility. Moreover, he expresses curiosity about the challenges faced during the AI conceptualization and implementation process. These insights reflect Huang’s commitment to fostering mutual understanding, collaboration, and effective utilization of AI technologies.

Denise Wong

Singapore has taken a human-centric and inclusive approach to AI governance, prioritising digital readiness and adoption within communities. This policy aims to ensure that the benefits of AI are accessible and beneficial to all members of society. The model governance framework developed by Singapore aligns with OECD principles, demonstrating their commitment to ethical and responsible AI practices.

In adopting a multi-stakeholder approach, Singapore has sought input from a diverse range of companies, both domestic and international. They have collaborated with the World Economic Forum Centre for the Fourth Industrial Revolution on ISAGO (the Implementation and Self-Assessment Guide for Organisations) and have worked with a local company to write a discussion paper on generative AI. This inclusive approach allows for a variety of perspectives and fosters collaboration between different stakeholders in the development of AI governance.

Practical guidance is a priority for Singapore in AI governance. They have created a compendium of use cases that serves as a reference for both local and international organisations. Additionally, they have developed ISAGO, an implementation and self-assessment guide for companies to ensure that they adhere to best practices in AI governance. Furthermore, Singapore has established the AI Verify Foundation, an open-source foundation that provides an AI toolkit to assist organisations in implementing AI in a responsible manner.

Singapore recognises the importance of international alignment and interoperability in AI governance. They encourage alignment with international organisations and other governments and advocate for an open industry focus on critical emerging technologies. Singapore believes that future conversations in AI governance will revolve around international technical standards and benchmarking, which will facilitate cooperation and harmonisation of AI practices globally.

However, concerns are raised about the fragmentation of global laws surrounding AI; compliance costs can increase when laws are fragmented, which could hinder the development and adoption of AI technologies. Singapore acknowledges the need for a unified framework and harmonised regulations to mitigate these challenges.

Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening divides and negatively affecting vulnerable groups. Singapore, being a highly connected society, is aware of the possibility of certain groups being left behind. Bridging these divides and ensuring that technology is inclusive and addresses the needs of vulnerable populations is a priority in their AI governance efforts.

Cultural and ethnic sensitivities in conjunction with black box technology are also a concern. It is unpredictable whether technology will fragment or unify communities, particularly in terms of ethnic and cultural sensitivities. Singapore acknowledges the importance of considering a culturally specific perspective to understand the potential impacts of AI better.

In conclusion, Singapore’s approach to AI governance encompasses human-centricity, inclusivity, and practical guidance. Their multi-stakeholder approach ensures a diversity of perspectives, and they prioritise international alignment and interoperability in AI governance. While concerns exist regarding the fragmentation of global laws and the potential negative impacts on vulnerable groups and cultural sensitivities, Singapore actively addresses these issues to create an ethical and responsible AI ecosystem.

Dr. Romesh Ranawana

Sri Lanka has only just begun its journey towards improving AI readiness, and it currently lags behind many other countries in both AI readiness and AI capacity.

However, the government of Sri Lanka has recognised the importance of AI development and has taken the initiative to develop a national AI policy and strategy, with the policy expected to be rolled out in November and the strategy in April 2024. The government understands that engagement in AI development should not be limited to the private sector or select universities; it needs to be a national initiative involving various stakeholders.

Currently, AI projects in Sri Lanka face challenges in terms of their implementation. Although over 300 AI projects were conducted by university students in the country last year, none of them went into production. The proposed AI projects in Sri Lanka often do not progress beyond the conceptual stage. This highlights the need for better infrastructure and support to bring these projects to fruition.

One of the primary obstacles to AI advancement in Sri Lanka is the lack of standardized and digitized data. Data is often siloed and still available in paper format, making it difficult to utilize it effectively for AI applications. This challenge is not just technical but also operational, requiring a change in mindsets, awareness, and trust. Efforts to develop AI projects are being wasted due to the absence of consolidated data sets that address national problems.

In order to overcome these challenges, Sri Lanka aims to establish a sustainable, inclusive, and open digital ecosystem. The United Nations Development Programme (UNDP) is working on an AI readiness assessment for Sri Lanka. This assessment will help identify areas that need improvement and provide recommendations to establish an ecosystem that fosters AI development.

In conclusion, Sri Lanka is in the early stages of improving its AI readiness and capacity. The government is taking an active role in formulating a national AI policy and strategy. However, there are challenges in terms of implementing AI projects, primarily due to the lack of standardized and digitized data. Efforts are being made to address these challenges and establish a sustainable digital ecosystem that supports AI development.

Alison Gillwald

In Africa, achieving digital readiness for artificial intelligence (AI) poses significant challenges due to several fundamental obstacles. Limited access to the internet is a major barrier, with many countries in Africa having 95% broadband coverage, but less than 20% of the population experiencing the network effects of being online. This indicates that the lack of internet connectivity severely hampers the potential benefits of AI. Additionally, the high cost of devices is a crucial factor preventing a large portion of the population from acquiring the necessary technology to access the internet and engage with AI applications. Moreover, rural location is a greater hindrance to access than gender, further exacerbating the digital divide in Africa.

Education emerges as a key driver of digital readiness and the ability to absorb AI applications in Africa. Access to education directly impacts individuals’ affordability of devices, thereby influencing their ability to engage with AI technology. Consequently, investing in education is crucial for enhancing digital readiness and facilitating successful AI adoption in Africa.

The African Union Data Policy Framework plays a critical role in creating an enabling environment for AI in Africa. The framework recognizes the significance of digital infrastructure in supporting the African continental free trade area and provides countries with a clear action plan alignment and implementation support. This framework aims to overcome the challenges faced in achieving digital readiness for AI in Africa.

Addressing data governance challenges and managing the implications of AI require global cooperation. Currently, 90% of the data extracted from Africa goes to big tech companies abroad, necessitating the development of global governance frameworks to effectively manage digital public goods. Collaboration on an international scale is essential to ensure that data governance supports AI development while protecting the interests and sovereignty of African nations.

Structural inequalities pose a significant challenge to equal AI implementation. When AI blueprints from countries with different political economies are implemented in other societies, inequalities are deepened, leading to the perpetuation of inequitable outcomes. Ethical concerns surrounding AI are also raised, highlighting the role played by major tech companies, particularly those rooted in the world’s most prominent democracies. Ethical challenges arise from these companies’ actions and policies, which have far-reaching implications for AI development.

An additional concern is the presence of bias and discrimination in AI algorithms due to the absence of digitization in some countries. In certain nations, such as Sri Lanka, where there is a lack of full digitization, people remain offline, resulting in their invisibility, underrepresentation, and discrimination in AI algorithms. This highlights the inherent limitations of AI datasets in being truly unbiased and inclusive, as they rely on digitized data that may exclude significant portions of the global population.

In conclusion, African countries face several challenges in achieving digital readiness for AI, including limited internet access, high device costs, and rural location constraints. Education plays a crucial role in enhancing digital readiness, while the African Union Data Policy Framework provides an important foundation for creating an enabling environment. Addressing data governance challenges and managing the implications of AI require global cooperation and collaboration. Structural inequalities and ethical concerns pose significant risks to the equitable implementation of AI. Additionally, the absence of digitization in some countries leads to bias and discrimination in AI algorithms.

Alain Ndayishimiye

AI has the potential to have a profound impact on societies, but it requires responsible and transparent practices to ensure its successful integration and development. Rwanda is actively harnessing the power of AI to advance its social and economic goals. The country aims to become an upper middle-income nation by 2035 and a high-income country by 2050, relying heavily on AI technologies.

Rwanda’s national AI policy is considered a beacon of responsible and inclusive AI. This policy serves as a roadmap for the country’s AI development and deployment and was developed collaboratively with various stakeholders. Through this multi-stakeholder approach, Rwanda was able to create a comprehensive and robust policy framework that supports responsible AI practices.

One key benefit of the multi-stakeholder approach in developing Rwanda’s AI policy is the promotion of knowledge sharing and capacity building. By bringing together different stakeholders, experiences and insights were shared, fostering learning and collaboration. This approach also contributed to the strengthening of local digital ecosystems, creating a supportive environment for the development and implementation of AI technologies.

However, ethical considerations remain important in the development and deployment of AI. Concerns such as biases in AI models and potential privacy breaches need to be addressed to ensure AI is used ethically and does not harm individuals or society. Additionally, the impact of AI on job displacement and potential misuse in surveillance should be carefully managed and regulated.

To further promote the responsible use of AI and create a harmonised environment, it is crucial for African countries to collaborate and harmonise their AI policies and regulations. This would allow for a unified approach when dealing with large multinational companies and help reduce the complexities of regulation. Harmonisation would also facilitate the development of shared digital infrastructure, attracting global tech giants by providing a consistent and supportive regulatory environment.

In conclusion, the transformative potential of AI for societies is significant, but responsible and transparent practices are essential in its development and deployment. Rwanda’s national AI policy serves as an example of responsible and inclusive AI, with a multi-stakeholder approach promoting knowledge sharing and capacity building. However, ethical considerations and the harmonisation of AI policies among African countries should be prioritised to ensure the successful integration and benefits of the digital economy, positioning Africa as a significant player in the global digital space.

Galia Daor

The Organisation for Economic Co-operation and Development (OECD) has been actively involved in the field of artificial intelligence (AI) since 2016. In 2019, it adopted the first intergovernmental standard on AI, the OECD AI Principles. These consist of five values-based principles for all AI actors and five policy recommendations for governments and policymakers.

The five values-based principles centre on inclusive growth, human-centred values and fairness, transparency and explainability, robustness and safety, and accountability. They aim to ensure that AI systems respect human rights, promote fairness, avoid discrimination, and maintain accountability. Through these principles, the OECD seeks to establish a global framework for responsible AI development and use.

The OECD AI Principles also provide policy recommendations to assist governments in developing national AI strategies that align with the principles. The OECD supports countries in adapting and revising their AI strategies according to these principles.

In addition, the OECD emphasizes the need for global collaboration in AI development. They believe that AI should not be controlled solely by specific companies or countries. Instead, they advocate for a global approach to maximize the potential benefits of AI and ensure equitable outcomes.

While the OECD is optimistic about the positive changes AI can bring, they express concerns about the fragmentation of AI development. They highlight the importance of cohesive efforts and coordination to avoid hindering progress through differing standards and practices.

To conclude, the OECD’s work on AI focuses on establishing a global framework for responsible AI development and use. They promote principles of fairness, transparency, and accountability and provide support to countries in implementing these principles. The OECD also emphasizes the need for global collaboration and acknowledges the potential challenges posed by fragmentation in AI development.

Robert Opp

Embracing artificial intelligence (AI) has the potential to make significant progress towards achieving the Sustainable Development Goals (SDGs), according to a report by the UN Development Programme (UNDP) and ITU. The report highlights the positive impact that digital technology, including AI, could have on 70% of the SDG targets. However, the adoption of AI varies among countries due to their differing stages of digital transformation and the challenges they face.

For instance, Sri Lanka requires a national-level initiative, as AI readiness and capacity cannot be built solely at the corporate or private-sector level. Other countries have recognized this and have implemented national-level initiatives. UNDP is actively involved in supporting digital programming and has initiated the AI readiness process in Sri Lanka, Rwanda, and Colombia. This process aims to complement national digital transformation processes and views the government as an enabler of AI.

Challenges in implementing AI include fragmentation, financing, ensuring foundation issues are addressed, and representation and diversity. Fragmentation and foundational issues have been identified as concerns, as AI is only as good as the data it is trained on. Additionally, financing issues may hinder the effective implementation of AI, and it is crucial to ensure representation and diversity to avoid bias and promote fairness.

Advocates argue for a multi-stakeholder and human-centered approach to AI development as a method of risk management. This approach emphasizes the importance of including various worldviews and cultural relevancy in the development process.

The report also highlights the need for inclusivity and leaving no one behind in the journey towards achieving the SDGs. It champions working with indigenous communities, who represent different worldviews, to ensure that every individual has the opportunity to realize their potential.

In conclusion, AI presents a unique opportunity for human progress and the achievement of the SDGs. However, careful consideration must be given to address challenges such as fragmentation, financing, foundation issues, and representation and diversity. By adopting a multi-stakeholder and human-centered approach, AI can be harnessed effectively and inclusively to drive sustainable development and improve the lives of people worldwide.

Session transcript

Robert Opp:
So, please feel free to join us at the table. Don’t have to sit in the gallery. This is a round table after all. Peter, are you going to lurk in the corner over there or you want to join us at the table? Can I just do a check for our panelists online? Dr. Ranawana, are you there? Can you hear us? Oh, okay, now we can see you. Oh, perfect. Thank you. And we’ve got Alain, are you there? Yes. Can you unmute, please, so that we can? Oh, apologies for that. I was speaking on mute. Perfect. Yeah, good morning to you all, and good day to whichever part of the world you’re joining from. Okay, great. Thank you so much. All right. We’ll get started. Still some seats at the table. Feel free to join us at the table if you wish. I think we’ll get started here. Okay. Good afternoon, everyone in Kyoto. Good morning, good afternoon, or good evening for those of you joining online. It’s great to have you all with us. This session is on AI is Coming, Are Countries Ready or Not? And this week has been full of AI-related events, and I’m grateful that you’ve still got the stamina to join us for this one. This is a discussion that we really want to bring forward on how countries in different stages of their digital transformation effort are taking the opportunity, or trying to figure out the challenges, around adopting artificial intelligence for the purpose of their national development process. And so I’m looking forward to a good conversation on this. My name is Robert Opp, and I’m the Chief Digital Officer of the United Nations Development Programme. UNDP, for those of you who are not aware, is essentially a big development arm of the UN system. We have a presence in 170 countries. We work across many different thematic areas, including governance, climate, energy, resilience, gender, et cetera, all for the purpose of poverty eradication. 
And our work in digital really stems from that, because it is about how do we embrace the power of digital technology in a responsible and ethical way that puts people and their rights at the center of technological support for the development. So just to set a few words of context, I think, obviously, AI, especially with the advent of generative AI, has just exploded into the public consciousness around what is potentially available for countries in terms of the power of technology. And as we are in terms of the state of, let’s say, a pivotal point of history, three weeks ago we celebrated the SDG Summit. It marks the halfway point to the Sustainable Development Goals. We are not on track for the Sustainable Development Goals, unfortunately. Only 15% of the targets have actually been achieved. Some work that we, together with the International Telecommunication Union did in a report that was released called the SDG Digital Acceleration Agenda, we found that 70% of the SDG targets could actually be positively impacted with the use of technology. And I have to say, during that week of the high-level segment, a few weeks back, of the General Assembly, there was a lot of discussion around digital transformation overall, the power of technology, and particularly, like here, the interest, I might say, the buzz around artificial intelligence and what might it do. But it’s not so straightforward for countries to know what to do, where to turn, for countries who don’t have necessarily all of the foundations, who are not aware of the models out there. And so the conversation today is really about how do we, what situation are countries in now, and what might we do to support countries as they embrace AI? What can countries also do to reach out and organize themselves with the support of others? And I think it’s important to note that our view on this is really based in the opportunity. 
A number of discussions this week have focused on the potential negative impacts of artificial intelligence, which is correct, because there are lots of concerns. But on the positive side, when we look at this as UNDP, there is tremendous potential opportunity here to embrace AI and really make significant progress against the SDGs. And so the conversation today is about how to do that in a responsible and ethical way. But we’re going to focus a little bit more on the opportunity than on the doom-and-gloom, end-of-humanity view, not that that’s not important. Okay, so to join us today, and really to give some texture to this roundtable, we’ve got a few fire-starter speakers with us. And we’re very grateful to have a great mix of people who can really speak to this issue. So we have Dr. Romesh Ranawana, who’s the chairman of the National Committee to Formulate AI Policy and Strategy for Sri Lanka, an entity that was established by the president this year. We have joining us soon, hopefully, in the chair beside me, which is still empty, Dr. Alison Gillwald, who’s the executive director of Research ICT Africa, a digital policy and regulatory think tank based in South Africa. We have Denise Wong with us, who’s assistant chief executive within the Data Innovation and Protection Group at Singapore’s Infocomm Media Development Authority, IMDA. We have Galia Daor, who’s a policy analyst within the Digital Economy Policy Division at OECD’s Directorate for Science, Technology, and Innovation. And we have Alain Ndayishimiye. I’m sorry, Alain, if I haven’t got all of the syllables of your last name in there, who’s the project lead for Artificial Intelligence and Machine Learning at the Centre for the Fourth Industrial Revolution based in Rwanda. And so my plan here is that we’ll go through some initial comments from all of our speakers, and then we do want to turn this over to you as well. 
I'm also going to make just a couple of remarks from the UNDP side about some of the work that we're doing in this space, just before we go to Q&A. But the offer to join us at the table is still open for those of you who'd like to come, because it is a roundtable. All right. With that, let's go to our first speaker. The setting here, the overall question, is: are countries ready for AI? What are you seeing on the ground? And what have the experiences been so far in building open, inclusive, trusted digital ecosystems that can support AI? To speak first, I'm going to turn to Dr. Romesh Ranawana from Sri Lanka. Dr. Ranawana, the floor is yours.

Dr. Romesh Ranawana:
Thank you so much, Robert, and good morning, good afternoon to all. As you mentioned, Sri Lanka has just embarked on this journey of trying to improve AI readiness and bring the benefits of AI to the general population. But what we face here, as a country with a very low level of AI readiness and AI capacity, is quite a gargantuan task, mainly because the AI revolution is just starting. If you look at where other countries are, we are significantly behind, and we need to catch up to make sure that we bring the benefits of AI both to the people and to our economy. And something we've seen happening around the world over the last few years is that most countries have realized that building AI readiness and AI capacity cannot be done at the corporate level, at the private-sector level, or by a few universities. It's now accepted that it has to be a national-level initiative that takes this forward. We've seen most of the developed countries formulate national AI strategies over the last few years, and most of the middle-income countries as well, especially over the last two years, have formulated policies. So in Sri Lanka, what we have is a strange situation where we have lots of engineers who are capable of building AI systems. We did a study recently and found that just over the last year, there have been more than 300 AI projects conducted by students in our universities. But the problem we have is that very few of these systems, or none of them, are actually going into production. They stop at the stage of a proof of concept or a research paper; they're not really going into society and actually delivering benefits.
So our challenge was how to create an ecosystem where not only is the research done, but some of these benefits are also brought out into government services, into building the economy, into making food production more efficient, into education and things like that. Now, we are very fortunate that the government took the initiative to set up the Presidential Task Force to look at national AI policy, and our current trajectory is to launch the policy in November, and then a strategy setting out the execution plan for the policy, which will come out in April 2024. But the challenge with AI is the fact that it is a general-purpose technology. AI can affect just about any sector, from education to the health sector to the national economy and government services, and as a country with limited resources, our challenge was how to pick the battles we want to address initially with our AI policy. We can't do everything, because our resources are limited, and this is quite a difficult task. As general guidelines for how we want to approach this, we had three main pillars. First, what are the foundational elements that we need to put in place to build up AI readiness and AI capacity? Second, what are the specific applications and areas we need to focus on that will have immediate impact, and also impact in the medium term? And third, how do we set up the regulatory environment to protect our citizens from the negative impacts of AI? For this, once again, the scope of what we could do is unlimited, and we've been very fortunate that UNDP stepped in and started working on an AI readiness assessment for Sri Lanka, which will be the foundation for setting out the parameters of what our main priorities and focus areas should be for the AI strategy that we are developing.
So the AI readiness assessment is underway at the moment, and it will evaluate our strengths, our weaknesses, and the opportunities that lie ahead for Sri Lanka in terms of AI. As we stride forward, our eyes are set on fostering an open and inclusive digital ecosystem that will not only withstand the shockwaves of the AI revolution, but also harness its potential for the greater good of our people. It's not going to be an easy task. Developing a policy and a strategy is one thing, but I think the key element for Sri Lanka is how we are going to execute on this, and how to do it in a way that is sustainable, so that the policy is not put aside when governments change or government priorities change. That's something we are also looking at. But really, our focus at the moment is first identifying our boundaries: what should the AI policy in Sri Lanka initially focus on? And from there onwards, building on where we are going to go. Thank you.

Robert Opp:
Thank you so much, Dr. Ranawana. Fascinating questions being asked, and I'm sure they're shared by a number of other countries. We're going to go to our next speaker, Denise Wong of IMDA in Singapore. Singapore has done a lot very quickly, I would say, in the AI space, and we're aware of some of the work you've done in policy and governance, and how you've really worked to put people at the centre, taking a human-centric approach. Could you tell us a little bit more about the approach Singapore has taken and some of the things you've done to make this a human-centric endeavour?

Denise Wong:
Thanks for the question, and thank you for having me. You're indeed right; I think our policy has always been quite an inclusive one. As part of the national AI strategy, everything that we're doing today has really been building upon foundations of inclusion and of a high level of digital readiness and adoption within our communities, and that's really been the bedrock for all the work we've done after that. Focusing specifically on AI governance, which is the area I work in: of course, in governance and regulation you're always thinking about risks and the potential for misuse, but I prefer not to see it only in that frame. A lot of it has been about what AI means for the public good and the public interest, and it's in that context that we see opportunities for our public at large, but of course with the appropriate guardrails, safety nets, and implementable guidance. If I sum up our approach, it's really been about being practical and having detailed guidance to help shape norms and usage. In doing so, we started off with a Model AI Governance Framework, fully aligned with OECD principles; the international alignment was very important to us, and we took a multi-stakeholder approach in developing it. We also took a fairly international approach: we got feedback from more than 60 companies from different sectors, both domestic and international, as part of the first iteration of the Model Governance Framework. We also worked on what we call ISAGO, an implementation and self-assessment guide for organisations, which was actually done together with the World Economic Forum Centre for the Fourth Industrial Revolution, and which helps companies align their governance practices with the Model Governance Framework. We also put together a compendium of use cases, which contains illustrations of how local and
international organizations can align with and implement these practices. So it was always a fairly practical approach: we took an organization-centric lens, and that took away the sting of politics, or risk, or existential questions, and really just focused on what companies could and should do at a very practical level. In the Gen-AI space, I would say we've also been fairly practical and industry-focused. We issued a discussion paper in June focusing on Gen-AI. It was framed as a discussion paper rather than a white paper because we really wanted to generate discussion; it was an acknowledgement that we didn't know all the answers. No one does. And we wrote it together with a company in Singapore so that we had both perspectives. We also launched the AI Verify Foundation in June. It's an open-source foundation. To be honest, we're also learning how to do open-source foundations as we go along. It has an open-source AI testing toolkit, not in the Gen-AI space but in the discriminative AI space, and that was really a toolkit we wanted to build and let companies take, adopt and adapt for their own use, so that we lowered the cost of compliance for companies. The AI Verify Foundation now has over 80 companies who have joined us from all over the globe, and we did think it was important to bring different voices to the table, at the industry level but also at the end-user level, to understand the fears and concerns that people had on the ground. So it's been a constant conversation that we've had with our public, with our companies, with international organisations, and with other governments, all with the aim of interoperability and global alignment, but also to encourage an open, industry-focused lens. That's generally the way we have approached a lot of these issues in critical emerging technologies, frontier technologies, where we may not know what the answer is.
The last piece I'll say is that we've also been looking at the question of standards, benchmarking and evaluations, because beyond the principles, a lot of this will be about those technical standards. We do think it is quite important to have international alignment on that as well, and we do hope that beyond general principles, that's where a lot of the conversation will go. Thank you.

Robert Opp:
Thank you so much, and I want to turn to our third country-focused example. We're going to go to Alain Ndayishimiye, and I'm sorry, Alain, you'll have to correct me on the pronunciation of your last name, who works at the Centre for the Fourth Industrial Revolution based in Rwanda. As a Centre for the Fourth Industrial Revolution, it's by nature a multi-stakeholder endeavour, and I guess my question for you is: what's the situation you see on the ground in Rwanda, and how can multi-stakeholder approaches help with building the capacity of local digital ecosystems to engage in AI?

Alain Ndayishimiye:
Yes, thank you, moderator. Once again, let me take the opportunity to greet everyone, wherever you are in the world. Before I contribute to this esteemed panel, allow me to extend my heartfelt gratitude to the UNDP team for inviting me to be part of this dialogue. As AI continues to shape our world, the need for responsible and transparent practices has never been more pressing. AI has the potential to transform societies on a global scale, but it also brings inherent risks if not developed, deployed and managed responsibly. This calls for a multi-stakeholder approach in addressing these issues. So, as introduced, my name is Alain Ndayishimiye, and I'm the project lead for AI and machine learning at the Centre for the Fourth Industrial Revolution. Our work revolves around identifying governance gaps and designing, testing and refining governance protocols and policy frameworks that can be developed and adopted by government policymakers and regulators, to keep up with the accelerated pace of AI and capture the benefits of adopting it while minimizing its potential risks. For Rwanda, AI is a leapfrog technology that, through appropriate design and responsible implementation, can help advance Rwanda's social and economic aspirations of becoming an upper-middle-income country by 2035 and a high-income country by 2050. Even more, AI as a general-purpose technology holds the power to help achieve the UN Sustainable Development Goals. In addition, AI has been identified as a driver of innovation and global competitiveness, as a result of the government's dedication to harnessing the power of data and algorithms as a catalyst for social and economic change and transformation. So, in response to the question posed to me, allow me to reference our journey in developing Rwanda's AI policy as a case study. Rwanda, often referred to as the land of a thousand hills, is now aspiring to be the land of AI innovation.
With our national AI policy now formally approved, we have set forth on a transformative journey. This policy isn't just a roadmap; it's a testament to Rwanda's vision and commitment to position itself as a beacon of responsible and inclusive AI on the global stage. However, its ambitious goals require a strong foundation to build upon, and this is where we bring the concept of stakeholder collaboration to the forefront, and why we were established as a centre. Our experience with the multi-stakeholder approach has been both enlightening and transformative. Crafting and implementing a national AI policy wasn't a solitary endeavour. It was a symphony of collaboration between the Ministry of ICT and Innovation of Rwanda, the Centre for the Fourth Industrial Revolution, the public sector, international partners, academia, the private sector and civil society, all working towards a common goal. These stakeholders brought different perspectives, experiences and expertise, enriching the policy development process. The process of developing the AI policy was an inclusive and consultative one. Consultations and workshops were held, enabling stakeholders to share their insights, concerns and ideas. By involving multiple stakeholders, the policy development process ensured transparency, accountability and participation, resulting in a more comprehensive and robust policy framework. One of the key benefits of a multi-stakeholder approach is the diversity of perspectives it brings to the table. In the case of Rwanda's AI policy, involving diverse stakeholders meant a holistic understanding of the current challenges and opportunities, resulting in more nuanced, effective policy solutions. The collaboration between stakeholders also helped build consensus and trust, fostering a sense of ownership of the policy among all stakeholders.
Furthermore, a multi-stakeholder approach promotes knowledge sharing and capacity building among stakeholders, ultimately strengthening local digital ecosystems. In the development of Rwanda's AI policy, stakeholders from different sectors and organisations shared experiences and knowledge, fostering learning and collaboration. This has not only resulted in a more comprehensive AI policy, but has also heightened the capacity of stakeholders to effectively implement it. The multi-stakeholder approach has greatly aided Rwanda in establishing its AI strategy on a firm data governance foundation. As we all know, data serves as the lifeblood of AI, making robust data governance essential. By collaborating with stakeholders through thoughtful consultation, Rwanda's AI policy now encompasses stringent data protection and privacy guidelines. This aligns with the principles of the recently enacted Rwanda Data Protection and Privacy Law, which we helped co-design, and which mandates safeguarding and upholding data privacy in any processing of the data of Rwandan residents. In conclusion, the multi-stakeholder approach has undoubtedly played a critical role in strengthening local digital ecosystems in Rwanda and in building the foundation of our strategy. It has promoted collaboration, knowledge sharing and capacity building among stakeholders, resulting in a more comprehensive and effective AI policy. This approach has not only fostered the inclusive and responsible development of AI, but has also built trust and confidence among stakeholders, promoting the sustainable and inclusive growth of local digital ecosystems. Furthermore, collaborative risk assessment, informed by various stakeholders, enables us to identify and mitigate adverse AI-associated risks. Moreover, by collaborating with our international partners, we have aligned our local AI initiatives with global best practices, ensuring that Rwanda is at the forefront of AI, both locally and internationally.
Thank you for the opportunity to speak. Over to you, moderator.

Robert Opp:
Thanks so much, Alain. Some really interesting observations there. Actually, the last thing you said was about looking at what's happening globally, and that's where I'd like to turn the conversation now. We have a couple of speakers who are going to give us a zoomed-out, overall perspective. With that, I want to turn to Alison Gillwald, who is the Executive Director of Research ICT Africa. You've been working across the African continent on research to understand where countries are with their AI readiness. We've just heard an example from the Rwanda case, but if you zoom out a bit, what are some of the takeaways that you're seeing from the African experience so far with AI?

Alison Gillwald:
Thank you very much. So I think, you know, when we speak about digital readiness for AI, we're actually asking the same question as we did about digital readiness for the data economy, the same questions we were asking about digital readiness for broadband or the internet, because in fact, across the continent, many of those foundational requirements are still not met. Many countries, you know, Rwanda, Lesotho and others, actually now have 95%-plus mobile broadband coverage, high-speed broadband coverage, and yet we have less than the sort of 20% critical mass that we know is needed to see the network effects, the benefits of being online and of broadband that are associated with economic growth and those kinds of things. So there are still existing analog problems, and there are also still enormous digital backlogs. Our research involves nationally representative surveys. They used to be access-and-use surveys, but now they're very much more comprehensive, looking at financial inclusion and platform work and all sorts of other things, so they really give us a better sense of the maturity of use and what people are actually doing. Those studies are done across several countries in Africa, and what we see is that the real challenges are around demand-side issues. So yes, the biggest barrier to the internet is actually the cost of the device, and there are all sorts of associated policy issues around that, and things that can be done. And then, of course, once people are online, you get very minimal use of data, of broadband, because people can't afford it. Affordability is the demand side; pricing is the supply side.
And that goes to our business models, our regulatory models, our lack of institutional capabilities or endowments to do some of the effective regulation you would need of these very imperfect markets. But really, the challenges are on the demand side. And, you know, all the aggregated gender data you get, which presents this growing disparity between women and men, and which is not true across all parts of the continent at all, is really about education. The thing that is driving access, whether you can afford that device or anything else, is education. And that's from the modelling we can do, because these are fully representative, census-frame, demand-side studies. Of course, associated with education is income, so, you know, people who are employed. And it's because women are concentrated amongst those who are less educated and less employed; in fact, gender on its own is not necessarily a major factor once you control for those. And then, of course, there are multiple other factors. A much greater factor than gender is actually rural location, along with a number of intersectional factors that really impact people's participation. So a lot of the demand-driven, new-technology frontier strategies are looking at some of the supply-side issues, and of course at the high-level skills issues, the shortage of data scientists or data engineers and that sort of thing. But it's actually this fundamental human development challenge, and this fundamental ecosystem challenge in the economy and society, that really has to be addressed if we're going to be able to address these higher-level issues. And so, you know, there are just questions of absorption.
Even if we are thinking about trying to create public-sector data sets that could be used for public-sector planning purposes, building some public value out of this, and I think that's an important point we need to come back to, because a lot of the AI models are driven by commercial value creation, which of course we desperately want on the continent, and by the innovation discourse, which of course we also want on the continent. But actually getting there, and making sure that it is equitable, inclusive and just, requires that some of these other factors actually drive policy: basically, the absorptive capacity of your firms and the absorptive capacity of your citizenry. We see, for example, many countries now planning AI applications for government services, but if you've got less than 20% of your population connected, then digital services become a vanity project, unless people can actually use these services effectively. And I think that's why this enabling environment, these foundational requirements, are absolutely essential. We speak about this a lot in terms of the infrastructure side and the human development side of things. But the enabling legal environment, the human-centred, as you called it, rights-based environment, as we'll see as it plays out, is actually an absolutely essential foundation for building this kind of environment. And so I just briefly want to touch on something that might seem tangential to AI, but which we think is an absolutely critical step in creating these conditions: the African Union Data Policy Framework, which has really created this enabling environment that you need.
The first half of the framework really deals with these enabling conditions. We don't call them preconditions, because we don't have the luxury of first getting 50% of people online, or the majority of the country with a digital ID, or a data infrastructure in place. So these things have to happen, but they're very strongly acknowledged. There's a very strong component in the data policy framework that creates this enabling environment, and it has really leveraged the African Continental Free Trade Area in getting member states to understand that unless they have this digital underpinning for the continental free trade agreement, which is a single digital market for Africa, they're simply not going to be the beneficiaries of a common market. And I think that's allowed some leverage. But it's also allowed us to return to some of the challenges we've had around a human rights framework. It's a high-level principle document, but there's a commitment to the progressive realization of very ambitious and, I think, absolutely laudable and good objectives that we now need to get to. There's an implementation plan, so countries can actually be supported. I think implementation has been our biggest challenge; Sri Lanka was actually speaking about the challenge of implementation being such a great one, so there's an implementation strategy now. But the important part is that we can come back to some of these foundational things that we haven't got right. There's lots of talk about a trusted environment. There are a lot of assumptions, from so-called best practices from elsewhere in the world, that assume institutional endowments, regulatory autonomy, competitive markets, and skills and ability in these markets that simply aren't there.
And I think, you know, the document importantly points out that, of course, cybersecurity and data protection are important for building trust. These are necessary conditions, but they're not sufficient conditions. And so questions around the legitimacy of the environment that you're in, if you're wanting to build a digital financial system that's going to engage with a common market, these all become really important. The framework also has a very clear action plan for aligning various potentially conflicting legacy policies that might be there. And then the big acknowledgment, which I will try to make my last point because I may have just run over: the issues particularly with data governance, which have very strong implications for AI, are that we're setting up a lot of national plans, and of course that's all we can do at one level, but essentially these are globalized, and we would argue digital public goods, that we now need to govern through global governance frameworks. A lot of the things we want to do, particularly the safeguarding against harms, very often involve global players. We've got our local companies, and we try to build local companies, but 90% of the data that's extracted from Africa goes out of Africa; it goes to big tech and big companies. So these national strategies have to be located globally, and we can no longer do this the way we usually would, with national public interest regulation.
And again, a lot of the focus is on the negative things about AI, on building a compliance regime for harms and protection. What gets less attention, though we do see it in a lot of OECD work in this area, is the economic regulation that you need of the underlying data economy: access to data, access to quality data, open data regimes, which are in the governance component of the data policy framework, by the way. A lot of the discussions we've had this week have put a lot of emphasis on safeguards, harms and privacy, but not a lot on what you would really need to redress the uneven distribution that we see in opportunities, not just harms, which we do see as well, both between countries of the world and within countries. Well, speaking of the OECD, we just happen to have them here.

Robert Opp:
But Alison, thank you for opening up a huge can of worms there, on multiple levels of global governance. We won't be able to get to all of those, but really interesting insight into the African experience so far. So I want to turn to the last of our initial set of speakers, Galia Daor, who's from the OECD. As Alison was saying, the OECD has done a fair bit of work in this space; you've produced a set of AI principles, and I know you're working on toolkits and guidance and things like that. But maybe tell us a little bit more about what you see from the global level: what countries are asking for, what the state of readiness is, just what you're seeing in general.

Galia Daor:
Yeah, thanks very much. I admit it's a bit challenging to speak after Alison on that front, but I will try, and I will try to do justice to the OECD's work, while also recognizing that there really are challenges, and that no one organization, and obviously no one country, can address all of them. I think at the OECD we come to this from the perspective of, yes, perhaps a set of assumptions, but that doesn't replace other work that needs to be done. So maybe just to get a bit into that work: the OECD started working on artificial intelligence in 2016, and then in 2019 we adopted the first intergovernmental standard on artificial intelligence, the OECD AI Principles. These are a set of five values-based principles that apply to all AI actors, and a set of five policy recommendations for governments, for policymakers. The values-based principles are about what makes AI trustworthy, and go into some of what other speakers have mentioned on the benefits of AI, but also the risks, and I think both are important. So elements like using AI for sustainable development and for well-being, and having AI be human-centered, as well as addressing risks through transparency, security, and the importance of accountability. Separately, the policy recommendations, perhaps linked to what Alison said, without prejudging the situation of any specific country, look at what a country would need to put into place in order to achieve these things: R&D for AI, but also the digital infrastructure, including data and connectivity; the enabling policy environment; human capacity building; and of course, international and multi-stakeholder collaboration, which is a point that others have made already.
The principles have now been adopted by 46 countries, including Singapore, as was already mentioned, and other countries like Egypt, and they also serve as the basis for the G20 AI principles. As was mentioned, our work now is focusing on how to support countries in implementing these principles, on how to translate principles into practice, and we're taking perhaps three types of actions, focusing on the evidence base. One aspect is to look at what countries are actually doing, looking at the national AI strategies that countries around the world are adopting. We have an online interactive platform, the OECD.AI Policy Observatory, that already covers more than 70 countries. And what we've seen since we started this work, at least as far as we know, is that 50 countries have adopted national AI strategies, which I think is an interesting data point. The Observatory also has other data on AI, including investment in AI in countries around the world; research publications, so you can see which countries are more active in this space and what they're doing; and jobs and skills, including the movement of AI jobs and skills around the world. So there's a wealth of information there. We also have a network of experts, which is multidisciplinary and international, with very broad participation. And we're developing practical tools to support countries, and organizations, I should say, in implementing the AI principles. Perhaps one last point I would mention, in terms of what we're seeing with these principles now: one thing we see is that they are influencing national and international AI frameworks around the world, with the definition of AI that's in the OECD principles, but also our classification framework for AI systems.
And the other thing that I’ll say is that we are also supporting countries, if they’re interested in sort of developing or revising their national AI strategies to align with the AI principles. So this is work that, for example, we’re now doing with Egypt. But I’ll stop here, and I really look forward to the discussion. Thank you.

Robert Opp:
Thanks so much, Galia. And the time is racing by. I can’t believe it. We have about 15 minutes left in this session. And I’ll do my best now to open up for some questions. And Jingbo, I wonder if you want to make a couple remarks as well, just to put you on the spot. But before, so think of your questions now. Before I turn to those, just to mention a couple things from the UNDP side, we are doing digital programming or supporting digital programming in about 125 countries, 40 to 50 of which are really looking at national digital transformation processes and some of those foundations that Alison was talking about. Because we really see the importance of building an ecosystem. This doesn’t happen with fragmented solutions. This happens when you build the kind of foundational ecosystem that is comprised of people, the regulatory side, the government side, the business side, and so on and so forth, as well as your underlying connectivity and affordability. And we’ve also started an additional process, which we’re calling the AI readiness process, that can basically complement that. And it really looks at, and this is what Dr. Ranawana was talking about, where we’ve been working to support Sri Lanka, Rwanda, and Colombia currently, how government serves as an enabler and how society is set up in terms of being able to handle artificial intelligence, in terms of capacity and some of those foundational issues. And this is something that we have been doing. It’s been piloted under the auspices of an interagency UN process that’s led by ITU and UNESCO. And it is something that we hope will be one of the tools available to countries in the toolkit as they seek to address these issues, taking that kind of ecosystem approach. So if there are any of you who are representing national interests here and would be more interested in that, please let us know. With that, I think I’d like to turn over. 
Jingbo, I was pointing to you because Jingbo Huang is the director of the UN University in Macau and has a research initiative focused on AI. And if you want to take the floor, I don’t want to put you on the spot, but if you had any quick observations, and then I’ll turn to some questions. We’ve got a question here and a couple online. Is that okay? I didn’t warn you before. I’m sorry.

Jingbo Huang:
Thank you, Robert. I’m here to learn. My name is Jingbo. I’m the director of the UN University Research Institute in Macau. So we are a UN research organization, and our work is mainly related to, you know, AI governance. So, for example, we conduct research, training and education from the angle of biases related to gender and children in algorithms, and we have done research in collaboration with some UN organizations, for example, UNESCO, ITU, UN Women, and soon, hopefully, with UNDP. So I’m really here with an open mind to learn about this topic. We saw a very nice overview and pictures from Africa, from Asia, from the OECD, so it’s really a great learning. So the one keyword that comes into my mind is collective intelligence, and it’s not only the collective intelligence between people and people, and we talk about regulatory frameworks, business. We have all these entities among humans to work together to make this infrastructure ready, and we’re also talking about machine intelligence, if we call them intelligence, and human intelligence working together. How are we taking it? Like what Robert has said at the beginning, it’s not only about the dark side. So how do we bring the bright side together? So collective intelligence is the keyword that just emerged in my mind. So I have, like, two questions since I’m learning here. So the first question is related to the different tools and frameworks that the OECD developed, that Singapore developed, and maybe Africa has also developed, and also UNDP. So how do these tools work together? For example, I just learned the concept of UNDP’s AI readiness assessment tool, and now I heard about your different tools. How do these tools work together? Or maybe they don’t. So this is the first question. Second question is to all the panelists about what keeps you awake at night now? 
Because this is important for me to learn, what are the challenges you’re facing right now in this implementation process, in this conceptualization process? I have the overview, but I want to know the pain points. Thank you.

Robert Opp:
All right, we’re going to quickly just go to a couple questions here so that we then will have time for response from panelists.

Audience:
Thank you very much. My name is Auke Aukepals, and I work for KPMG in the responsible AI practice. And first of all, I was triggered by this session title, so you did a good job with the session proposal. So, I work for KPMG, the Netherlands, but I am also coordinating our efforts globally. And what we see is a large difference in the extent to which countries themselves act in a democratic way. And being part of the ethics work stream really gives me a broad view of the entire world, actually, as certain countries have no democratic processes in place, while others do. So with our advisory practice, it’s really difficult to advise on ethics with a country that has no clue what that’s about, to be a little bit proactive about that. So that’s really difficult. Also, answering your question, are countries ready? No, definitely not yet. Because coming from the Netherlands, we also see issues even in our own country, which relatively is quite democratic. However, yeah. So we really need to cooperate together. And also thanks to the OECD guidelines and principles, they really function well. And we use them in our daily work on a daily basis. And we are also happy to contribute on next iterations, if possible. But yeah, these are my observations from the outside. Thanks. We’re going to take one more question here. We’ll go online quickly. And then I think we’ll just have a chance for panelists to come back once and then we’ll close. Hi. I am Armando Manzuela from the Dominican Republic. First of all, I’d like to thank all the organizations for doing this amazing session. All the people that were intervening have made remarkable points regarding AI for development in this case. Well, there’s a thing here with AI. And it’s the way that it’s being promoted by the companies, by international organizations that are promoting that AI will transform the world, that it will change everything, which is actually right. 
But the thing is that the race to become AI proficient at all levels in most nations, especially in the global south, has been taken in, I must say, not necessarily the right direction. Because we’re focusing on implementing algorithms, implementing solutions that are AI infused to do a myriad of things, especially in government. But the main problem is that we don’t have the core elements for doing a transition to an AI-based society just yet, starting with data. So we have problems with data quality, with data collection, with how we assure that the data is correct so we can prevent biases. And of course, we don’t have the infrastructure in place, and most of the countries have inadequate data protection and privacy laws and regulations. So given this situation, and knowing how things are moving and how things are approaching, how do we propose or create a set of rules, a set of frameworks that help to guide the countries into the right direction regarding data? Because when we talk about AI, we’re really talking about large language models, which is just data. So if the data is not right, how can we properly implement AI solutions that actually help our countries to develop? And this is moreover the question we in the global south are asking now. Dominican Republic. Thank you, Armando. Okay, let’s go online very quickly, because we’re really running out of time, and I’m going to turn to my colleague who’s on my team, Yasmin Hamdar, who’s been moderating online. And Yasmin, I’m sorry to make you do this, but can you just pick one question for the – I know you’ve got more than that, but just pick one and ask, please. First, thanks, Rob. So we have one interesting question. Given the rapid advances in AI capabilities, how can governments ensure that their technical infrastructure and workforce skills are agile enough to adapt to new AI technologies as they emerge?

Robert Opp:
Okay, how to make sure the workforce is agile enough, which is related, I think, to many of these. All right, so I’m going to go back to our panelists, and I think this, unfortunately, will have to be our closing round as well. And I think Jingbo’s given a good question that I’d like all of you to answer, which is what keeps you awake at night. But if you’d like to speak as well to the questions about the tools, I will also have a response on that one. Also the issue of this sort of how do we get the fundamentals right? How do we get the right data? And those kinds of things. How do we work toward a collective intelligence? Dr. Ranawana, can I just turn to you first for your brief responses, please?

Dr. Romesh Ranawana:
Of course. I mean, essentially, the problem is, like it’s been mentioned so many times, the foundation elements. And for us, one of the biggest obstacles to taking our AI ambitions forward and also to providing the benefit to the people, especially in terms of government efficiency, corruption, and making things more available, is data. Sri Lanka is fortunate that we have good connectivity and about 90% of the population does have connectivity available to them. But the lack of data, which I think has been highlighted so many times, is probably our biggest problem. Data is extremely siloed and it’s still available in paper format in a lot of situations. So how to first digitize it, standardize it, and then make it available to those who need it in a fair and responsible manner is probably our biggest challenge now. And that’s not only a technical challenge, but also an operational challenge. It’s changing mindsets, awareness, trust in these systems. And that’s something that we are really struggling with on how to take that forward. Thanks so much. Is that what keeps you awake at night? Absolutely. That is definitely one of the big ones. Because like I said, we have so many people doing AI projects, but they’re running AI projects on data that they download from the internet, data related to other countries. We don’t have projects running on Sri Lankan problems because we just don’t have those datasets available. So all these efforts are being wasted because we don’t have a consolidated set of datasets to address national problems. Thanks so much.

Robert Opp:
Alain, let’s go to you next. What keeps you awake at night?

Alain Ndayishimiye:
So here I’m referring to ethical considerations in the technical development and actual deployment of AI; it’s what often keeps me up at night. Concerns around risks associated with the technology, such as biases in AI models, potential privacy breaches, and broader societal impacts such as job displacement and the misuse of AI in areas such as surveillance and autonomous operations, are among the things that actually keep me up at night. So ensuring that AI is used responsibly and benefits all of society is a challenge that requires continuous vigilance and adaptation. And please allow me to also speak to the question around how these instruments need to work together. So let me speak on harmonization, especially in the African context. Harmonizing policy and regulatory efforts among African countries is not only pivotal for their participation in the global digital economy, but also provides a unified front when dealing with the large multinationals that are at the center of this global digital data economy transformation. Such harmonization efforts foster economic integration, enabling smoother cross-border trade and investment, and promote standardization, reducing the complexity of differing regulations. It also facilitates the development of shared digital infrastructure, ensuring connectivity across regions. A unified stance strengthens Africa’s voice in global negotiations and ensures better representation of its interests. By addressing shared digital challenges collectively, Africa can devise effective solutions, attract global tech giants through a consistent regulatory environment, and inspire innovation. Furthermore, a harmonized approach ensures consumer protection and robust data privacy standards, and boosts African competitiveness in the digital realm. 
In essence, a coordinated policy framework is essential for Africa to leverage the digital economy’s benefits and position itself as a significant player within this space. So thank you once again for this opportunity. Over to you.

Robert Opp:
Thanks so much, Alain. Denise, let’s turn to you. What keeps you awake at night on this issue, and any other comments you want to make?

Denise Wong:
Thank you. I think on a global level, I worry about fragmentation. I think we’ve been in this space for a long time now in different areas where global laws are fragmented, and that just raises compliance costs for everyone. So I think we have an opportunity to do it right and have that conversation early, and we should try and do that. I think at a more domestic level, I worry about leaving vulnerable groups behind, even in a society that’s highly connected and highly literate like Singapore. There’s always that fear that technology will widen divides and create harms that we cannot anticipate to groups of people that we should be protecting the most. And I guess the third thing I worry about is cultural sensitivities and ethnic sensitivities, especially with black box technology. It’s hard to predict whether the technology is going to fragment and divide, or it’s going to unify and cohere. And so part of what we do is to try and unpack what it means from a culturally specific lens. And that is really about AI for the public good.

Robert Opp:
Thanks Denise. I think I’ll turn to Alison.

Alison Gillwald:
Thank you. Sure. What keeps me awake at night is the inevitable deepening of inequality, unless we address some of the underpinning structural inequalities that are leading to this. And I think that’s very likely if we simply take these blueprints from countries with completely different political economies and conditions and just implement them onto these societies. And just in that regard, I have to say that although having democratic frameworks within which to develop AI policy is obviously a challenge for many of us, I think we really need to appreciate that actually the ethical challenges that we are facing are with some of the biggest tech companies, at least some of which come from the biggest democracies in the world. So I think the ethical issues should be addressed globally and can be addressed globally. And just finally to say, you know, on the point that was made about how we can’t actually unbias these big data sets: because, as Sri Lanka was mentioning, the countries aren’t digitized and people are not online, we simply can’t unbias the invisibility, the underrepresentation, and the discrimination that we’re seeing in algorithms currently.

Robert Opp:
Galia. Yeah.

Galia Daor:
So very quickly, just to say, I think I can really relate to a lot of the things that Denise said about the fragmentation, and this is a real concern. I think what keeps me up at night is also that we will miss out on the opportunities that AI has to really, that I think ultimately have the potential to make everything better for everyone if we do it right. And I think it’s too big to miss, and that means that it’s something that we can’t leave to just companies, we can’t leave to a certain set of countries, which I guess leads me to this has to be, and because AI itself is global, because it has no border, then it has to be a collaborative effort, and that needs to be genuinely collaborative, and I think this is a good, it’s not a start because we’ve been in that process for a while, but I think this kind of conversation is really important. Thanks.

Robert Opp:
Thanks, Galia, and thanks to all our panelists, and just we’re over time, and I’m sure we’re going to, yeah, I’m getting the nod, but I would just say a couple things to try to sum up what I’ve heard, and to add a little bit of my own insomnia or sleeplessness to this. You know, I think we’ve heard, there are certainly the challenges here, and the challenges that have been named are things like fragmentation, and the foundations, and it’s so important to get the foundations right, which is hard, you know, this is not a simple process. It involves a lot of moving parts, and a lot of complexity, and a lot of issues around financing and everything else, but we have to do it, and we have to help countries get there, and I’m talking to myself, that’s partly our role. And if I add what keeps me awake at night, it’s very similar to what is being mentioned here. If we, as the United Nations system, stand for leaving no one behind as part of the 2030 agenda, and if we say that artificial intelligence is a major opportunity for humanity, but artificial intelligence is only as good as the data behind it, and the training of the data, and the production of the algorithms, then how are we going to ensure representation and diversity in the underlying data sets and the models that are put forward? Because these will not be culturally relevant to everyone’s worldview. 
We work with indigenous communities across the world, with thousands of local languages, these represent different worldviews, and human development is not about everyone becoming the same, it’s about every human realizing their own potential, so, that being said, the opportunities here are that, I think what we’ve heard over and over again is the multi-stakeholder approach is really critical, and if we’re going to bring in those worldviews, it’s going to have to be an intentional consultative process, and I think being human-centered in all of this is a method of risk management. This is a way to ensure that we build the basis and the foundation that we really need, and I know I’m missing out some of the nuanced points that were made, but I’m really very grateful for all of you for, first, our panelists for having spoken today, and giving us some insights, and for all of you who’ve joined us in the room, as well as online, and please do reach out to us at UNDP, or the other panelists in their organizations for any other questions or support that we might be able to give, and we will get through this together. So thank you very much. Please give ourselves a round of applause. Thank you. Thank you. Thank you.

Alain Ndayishimiye

Speech speed

160 words per minute

Speech length

1287 words

Speech time

482 secs

Alison Gillwald

Speech speed

178 words per minute

Speech length

2133 words

Speech time

718 secs

Audience

Speech speed

143 words per minute

Speech length

730 words

Speech time

306 secs

Denise Wong

Speech speed

165 words per minute

Speech length

1029 words

Speech time

375 secs

Dr. Romesh Ranawana

Speech speed

174 words per minute

Speech length

1246 words

Speech time

429 secs

Galia Daor

Speech speed

162 words per minute

Speech length

957 words

Speech time

355 secs

Jingbo Huang

Speech speed

163 words per minute

Speech length

400 words

Speech time

148 secs

Robert Opp

Speech speed

172 words per minute

Speech length

3039 words

Speech time

1062 secs

Accelerating an Inclusive Energy Transition | IGF 2023 Open Forum #133


Full session report

Audience

In different parts of the world, there is variation in the technologies used for energy distribution. Some regions rely on gas, while others rely on electricity. This highlights the global disparity in energy usage and the need for equitable access to energy resources.

The impact of technology on the environment is a crucial consideration, as its consequences become more significant with advances in technology. It is essential to assess the environmental impact of new technologies and develop sustainable alternatives.

Furthermore, fairness in energy distribution and technology usage globally is emphasised. The use of different energy sources, such as gas and electricity, underlines the importance of ensuring equal access to energy resources, reducing inequality, and achieving affordable and clean energy for all.

The concept of “clean code” is also discussed, which refers to efficient and well-optimized software that consumes less energy. Clean coding practices can contribute to responsible consumption and production, aligning with the Sustainable Development Goal of responsible consumption and production.

The analysis also raises concerns about the energy consumption associated with Artificial Intelligence (AI). While AI has positive impacts, it also presents challenges in terms of energy consumption. The consideration of energy consumption in AI development and policy-making processes is essential to address its environmental implications.

In conclusion, the analysis highlights the lack of global uniformity in energy distribution technologies. It stresses the need to consider the environmental impact of technological advancements and work towards equitable energy distribution. Additionally, the importance of clean coding practices and the need to address energy consumption in AI development are emphasized. By addressing these issues, we can move towards sustainable energy practices and responsible technological development.

Chantarapeach Ut

The analysis explores the importance of supporting youth-led innovation and entrepreneurship in green technology. It emphasises the need to nurture and financially support young people’s initiatives in this field. Examples of youth-led technology innovations include green energy engineering, smart agriculture, renewable energy optimizations, air quality monitoring, green buildings, climate modeling, and eco-friendly transportation. By empowering young people to develop environmentally-friendly technology, we can make significant progress towards achieving SDG 7 (Affordable and Clean Energy) and SDG 13 (Climate Action).

Additionally, the analysis highlights the significance of raising awareness and exposure to green jobs for young people. Green jobs contribute to sustainable energy advancements and include positions such as green AI researchers, sustainability data analysts, renewable energy engineers, and clean tech researchers. By informing and inspiring young people about these opportunities, we can encourage them to pursue careers that contribute to a sustainable energy future. This aligns with SDG 7 (Affordable and Clean Energy) and SDG 8 (Decent Work and Economic Growth).

Furthermore, involving young people in decision-making processes related to digital policy and climate change is essential. Platforms like the Cambodian Youth Internet Governance Forums and the Local Conference of Youth under UNGO offer spaces for young people to participate and express their ideas on these crucial matters. Inclusive energy transition can be accelerated by incorporating youth perspectives, leading to more effective and inclusive energy systems. This involvement aligns with SDG 7 (Affordable and Clean Energy) and SDG 13 (Climate Action).

The analysis also highlights the need to harness renewable energy in Asia more efficiently and effectively. Currently, many Asian countries heavily rely on fossil fuels. However, by focusing on renewable energy technology and improving energy sharing arrangements, Asia can reduce its dependence on non-renewable resources and promote sustainability. This aligns with SDG 7 (Affordable and Clean Energy).

Moreover, the analysis mentions Chantarapeach Ut, a youth advocate representing a team committed to energy transition. Ut emphasises the importance of adult support and guidance in directing youth efforts towards achieving inclusive energy transition. Collaboration between young people and adults is crucial in driving effective change.

In conclusion, this analysis advocates for supporting youth-led initiatives and involvement in green technology, raising awareness of green jobs, including young voices in decision-making processes, and harnessing renewable energy in Asia. It highlights the need to empower and engage young people to accelerate the development of a sustainable energy future and address climate change. Collaboration between young people and adults is vital in driving inclusive energy transition. This analysis serves as a call to action for governments, organizations, and communities to invest in empowering and engaging young people in achieving a sustainable energy future.

Neil Yorke-Smith

The use of artificial intelligence (AI) in the energy system is already proving to be beneficial in various areas. AI is being utilized in forecasting, system design, real-time balancing, demand response, and flexible pricing. These applications of AI can enhance the efficiency and effectiveness of the energy system, ultimately contributing to a transition away from fossil-based fuels.

However, it is crucial to consider the ethical, legal, social, and economic aspects of implementing AI in the energy sector. While AI offers promising solutions, the societal impacts and implications of an AI energy system are not yet fully understood. Therefore, thorough study and examination of these aspects are necessary to ensure responsible and sustainable implementation of AI in the energy sector. It is also equally important to consider values, trust, justice, and fairness in addition to technical efficiency when incorporating AI into the energy system.

The principles of trustworthiness, justice, and fairness should guide the use of AI in the energy system. Trustworthiness involves establishing meaningful control and collaboration between humans and AI, ensuring that AI systems are reliable and accountable. Justice entails considering whether the benefits derived from AI in the energy system are distributed equally among all stakeholders. Fairness relates to the design of energy markets, ensuring they are efficient, effective, and fair for all participants.

Lessons can be learned from both European and non-European contexts when implementing AI in the energy system. Countries like the Netherlands can benefit from studying the experiences of other nations, while shared resources and concepts from countries like Nigeria can potentially be valuable in the development of AI energy systems.

Another important consideration is the incorporation of societal values into the design process of AI technology. The concept of value-sensitive design emphasizes the importance of incorporating the values of potential customers and society into the design of AI systems. This approach ensures that technology aligns with the values and needs of society, promoting responsible consumption and production.

Efficiency in code design is also seen as crucial for sustainability. By focusing on efficiency, developers can reduce the size and resource consumption of AI algorithms. For example, an app that could be 500 megabytes can be streamlined to 100 megabytes through code efficiency. Recognizing efficiency as a non-functional requirement of AI algorithms can help drive sustainability efforts in the energy sector.

Neil, as an expert in the field, highlights the significance of considering long-term decision-making and the potential evolution of values. Decisions made today can have lasting consequences, especially in terms of infrastructure that can last for decades. Therefore, it is essential to be aware of how present choices can impact the future and how societal values may change over time.

Accountability is a crucial aspect in the discussion of AI and the energy transition. Those who develop AI systems should be held accountable for their actions and the impact of their technology. In addition, society itself should be more accountable towards the energy transition, recognizing its role in promoting sustainable energy practices.

Lastly, global cooperation and learning from each other are vital in the energy transition. By working together and sharing knowledge and experiences, different regions can contribute to the successful implementation and advancement of AI in the energy system. This collaborative approach promotes shared goals of affordable and clean energy, climate action, and sustainable development.

In conclusion, the use of AI in the energy system has the potential to bring substantial benefits, but careful consideration must be given to the ethical, legal, social, and economic aspects. Trustworthiness, justice, and fairness should guide the implementation of AI, and lessons can be learned from diverse contexts. Incorporating societal values, ensuring code efficiency, considering long-term decision-making, and fostering accountability and global cooperation are essential for a successful energy transition.

Alisa Heaver

An analysis of the provided information highlighted several key points discussed by the speakers at the event. One of the main concerns raised was the potential increase in energy demands due to the growth of artificial intelligence (AI). Projections suggest that by 2027, the energy requirements for AI could be equivalent to those of the entire Dutch economy. This staggering statistic emphasizes the need to address the energy implications of AI expansion and find sustainable solutions to meet the growing demand.

In line with a focus on sustainability, the importance of sustainable digitalisation was also emphasised. The Dutch National Coalition has taken up the task of working towards sustainable digitalisation, recognising the need to balance technological advancements with responsible consumption and production. This approach reflects the commitment to aligning innovation and infrastructure with the principles of sustainability outlined in SDG 9.

Accountability was another key theme discussed during the event. The importance of ensuring accountability in the development and implementation of AI systems, particularly in relation to the energy transition, was highlighted. The conversation was conducted with international representatives, providing a global perspective on these issues. This emphasis on accountability indicates the recognition of the potential risks associated with AI development and the need to establish standards and guidelines to ensure responsible and ethical practices.

Another noteworthy observation from the analysis is the call for increased attention to sustainability within the Global Digital Corporation (GDC). Alisa Heaver, one of the speakers, noted a lack of mention of sustainability in the policy brief of the tech envoy and urged a greater focus on this topic. She emphasized the historical significance of the venue, where the Kyoto Protocol was signed, as a symbolic reminder of the importance of prioritising sustainability in the context of digitalisation and global cooperation.

Lastly, the intersection of sustainability and digitalisation was highlighted as crucial for future progress. The combination of these two areas was recognised as a key factor in driving sustainable development and achieving the SDGs. The increased discussions around sustainability and digitalisation were appreciated, implying a growing awareness of the need to balance technological advancement with environmental responsibility.

In conclusion, the analysis of the provided information reveals key points discussed by the speakers at the event. These include concerns over the energy demands of AI, the importance of sustainable digitalisation, the need for accountability in AI development, the call for increased attention to sustainability within the GDC, and the recognition of the intersection between sustainability and digitalisation for future progress. These insights shed light on the challenges and opportunities presented by AI and underscore the importance of integrating sustainability into technological advancements.

Jessie

The session entitled “Accelerating an Inclusive Energy Transition” commenced with a video, setting the stage for subsequent discussions. Alisa Heaver, a senior policy officer at the Dutch Ministry of Economic Affairs and Climate Policy, welcomed participants to the open forum and emphasized the indispensable role of a live moderator despite the potential of artificial intelligence (AI) for innovation.

The session aimed to explore the opportunities and challenges associated with achieving an inclusive energy transition, with a focus on the significance of AI in driving innovation in the energy sector. A diverse panel of experts provided their insights and engaged in stimulating discussions throughout the forum.

One key discussion point centered around the potential of AI in hastening the transition towards a more inclusive energy system. The experts highlighted how AI-powered technologies could enhance energy efficiency, facilitate effective demand management, and support the integration of renewable energy sources. It was underscored that accessible and affordable solutions should be developed, benefiting all communities, particularly marginalized groups.

The panelists also addressed concerns and challenges relating to the implementation of AI in the energy sector. They emphasized the need for robust regulations and ethical frameworks to ensure transparency, fairness, and accountability. Furthermore, addressing potential biases in AI algorithms was deemed crucial to prevent the exacerbation of existing inequalities.

Throughout the session, the importance of collaboration and dialogue between policymakers, industry leaders, and civil society was emphasized. Engaging multiple stakeholders from different sectors and regions was considered vital for fostering inclusive decision-making and ensuring equitable distribution of the benefits of an energy transition.

In conclusion, the session underscored the immense potential of AI in driving an inclusive energy transition. Discussions highlighted the pivotal role of a live moderator in facilitating meaningful exchanges and creating an environment conducive to collaboration. By employing AI in a responsible and inclusive manner, it is possible to overcome challenges and expedite the transition towards a sustainable and equitable energy future for all.

Noteworthy observations from the session included the recognition that technological advancements alone are insufficient for achieving an inclusive energy transition; a holistic approach encompassing social, economic, and environmental dimensions is necessary. Additionally, the session stressed the urgency of addressing equity and social justice issues to prevent the perpetuation of existing disparities in energy access and affordability.

Tim Vermeulen

The energy landscape in Europe is undergoing significant transformations, presenting new opportunities and challenges. One major development is the increasing use of AI in energy management, which has the potential to revolutionise the industry. However, it also introduces biases that can affect energy distribution and access. Wealthier neighbourhoods tend to benefit more from AI in energy supply, but efforts are being made to tackle this bias through transparency and sharing of information by power grid companies.
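The bias described above can be illustrated with a minimal sketch: a purely demand-driven ranking of neighbourhoods for grid-capacity upgrades favours wealthy areas (highest predicted EV and solar uptake), while a score that also weighs the current capacity deficit changes the outcome. All neighbourhood names and figures below are hypothetical.

```python
# Toy model: rank neighbourhoods for grid-capacity upgrades.
# A purely demand-driven score favours wealthy areas, while a
# fairness-adjusted score also weighs current grid access.
# All names and numbers are hypothetical illustrations.

def demand_score(n):
    # Predicted load growth alone: proxies for wealth dominate.
    return n["ev_uptake"] + n["solar_uptake"]

def fair_score(n, alpha=0.3):
    # Blend predicted growth with a capacity-deficit term so
    # under-served areas are not perpetually skipped.
    return alpha * demand_score(n) + (1 - alpha) * (1.0 - n["capacity"])

neighbourhoods = [
    {"name": "Rijkdorp",   "ev_uptake": 0.8, "solar_uptake": 0.7, "capacity": 0.9},
    {"name": "Middelwijk", "ev_uptake": 0.4, "solar_uptake": 0.3, "capacity": 0.6},
    {"name": "Zuidbuurt",  "ev_uptake": 0.2, "solar_uptake": 0.1, "capacity": 0.3},
]

upgrade_by_demand = max(neighbourhoods, key=demand_score)["name"]
upgrade_by_fairness = max(neighbourhoods, key=fair_score)["name"]
```

In this toy data the demand-only score picks the wealthy neighbourhood, while the fairness-adjusted score picks the under-served one; the weighting `alpha` is exactly the kind of value-laden design choice the guidance ethics approach asks stakeholders to make explicit.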

Open-source technologies are also gaining momentum in Europe’s energy sector, particularly for core grid capabilities. These technologies enhance grid forecasting capabilities, leading to better management and utilisation of energy resources.
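Forecasting of the kind these open-source stacks provide can be illustrated with a minimal baseline: predict the load for a given hour as the average of the same hour over recent days. All load figures below are hypothetical.

```python
# Minimal sketch of a short-term load-forecast baseline of the kind
# open-source grid tooling builds on: forecast a given hour's load as
# the average of that same hour over the previous few days.
# Data are hypothetical hourly loads in MW.

def naive_hourly_forecast(history, hour, days=3):
    # history: list of (day, hour, load_mw) observations
    same_hour = [load for d, h, load in history if h == hour][-days:]
    return sum(same_hour) / len(same_hour)

history = [
    (1, 18, 410.0), (2, 18, 430.0), (3, 18, 420.0),
    (1, 3, 180.0),  (2, 3, 175.0),  (3, 3, 185.0),
]

evening_peak = naive_hourly_forecast(history, hour=18)  # avg of 410, 430, 420
night_low = naive_hourly_forecast(history, hour=3)      # avg of 180, 175, 185
```

Production forecasters replace this average with learned models, but even this baseline shows why shared, transparent forecasting code matters: the forecast directly drives capacity and congestion decisions.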

Fairness is an important consideration, not just in energy distribution, but also in cutting CO2 emissions. Different regions, countries, and companies have diverse energy mixes and challenges. For example, the Netherlands heavily relies on natural gas due to its domestic availability. It is, therefore, crucial to view these differences from a modular perspective, considering the specific circumstances and needs of each entity.

Transparency, modularity, and technology are key factors shaping Europe’s energy landscape. A modular technology system allows countries to interact with one another, fostering an open and collaborative approach towards a more sustainable energy sector.

Sustainability, fairness, and integrity are highly valued from a European perspective. Access to energy is considered a universal right that should be protected and ensured for all. Maintaining the integrity of the energy system is essential for achieving sustainability and fairness.

Efficiency and awareness are vital in building applications that drive the energy transition. Clean and efficient code development is crucial across sectors as it directly impacts the transition to cleaner energy sources.

Every job has the potential to contribute to a clean and affordable energy future. Jobs across various sectors influence the energy transition, highlighting the importance of a comprehensive approach.

Technology plays a significant role in opening up new possibilities and advancing different areas within the energy sector. The potential of openness and complexity in technology is recognised by experts.

To foster the global energy transition, it is imperative to share knowledge and values on a global scale. Managing the knowledge-based landscape globally is fundamental to driving progress and collaboration in the energy sector.

In summary, Europe’s energy landscape is evolving rapidly, with advancements and challenges. The use of AI, open-source technologies, and the consideration of fairness in energy distribution and CO2 emissions are key focal points. Transparency, modularity, and technology are crucial, and sustainability, fairness, and integrity are highly valued. Efficiency and awareness in application development drive the energy transition, and every job contributes to a clean and affordable energy future. Technology’s potential lies in its openness and complexity, and global knowledge sharing is vital for the energy transition.

Hannah Boute

The Dutch National Coalition for Sustainable Digitisation focuses on the application of artificial intelligence (AI) in relation to energy consumption and efficiency. They believe that in order to accelerate an inclusive energy transition, both the greening of IT (making AI more energy-efficient) and greening by IT (using AI to improve energy efficiency) are essential. This approach recognises the potential of AI to contribute positively to the transformation of the energy sector.
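The "greening of IT" side can be made concrete with a back-of-the-envelope footprint estimate: energy is power draw times time, scaled by data-centre overhead (PUE), and emissions follow from the grid's carbon intensity. The formula is standard; every figure below is a hypothetical illustration, not a measurement.

```python
# Back-of-the-envelope sketch: energy and CO2 footprint of a model
# training run, from GPU power draw, training time, data-centre
# overhead (PUE), and grid carbon intensity.
# All input figures are hypothetical illustrations.

def training_footprint(gpus, watts_per_gpu, hours, pue, kg_co2_per_kwh):
    # Energy in kWh: total watts * hours * overhead, converted from Wh.
    kwh = gpus * watts_per_gpu * hours * pue / 1000.0
    return kwh, kwh * kg_co2_per_kwh

# e.g. 8 GPUs at 300 W for 100 hours, PUE 1.5, 0.35 kg CO2 per kWh
kwh, co2 = training_footprint(8, 300.0, 100.0, 1.5, 0.35)
```

Estimates like this are what a green-software working group would turn into design principles: the same calculation run before and after an optimisation quantifies the saving.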

In terms of ethics, the coalition adopts a guidance ethics approach to address ethical issues associated with the implementation of AI. They recognise that ethics play a crucial role in the responsible use of AI technologies. To understand these ethical dimensions, public participation is considered crucial. This ensures that the perspectives and concerns of all stakeholders are taken into account, resulting in more informed and ethical decision-making.

Hannah Boute, a proponent of the guidance ethics approach, advocates for its use in evaluating the effects of technology in specific contexts. In the case of AI applied to the energy transition, stakeholders work together to identify both the positive and negative effects of the technology. These identified effects and the underlying values form the basis for designing, implementing, and using the technology. This approach ensures that AI aligns with the values and goals of the energy transition.

The coalition also recognises the importance of sustainability in the design of AI. They have a working group dedicated to developing principles for green software. By integrating sustainability into the design process, they aim to create AI systems that are environmentally friendly and contribute to responsible consumption and production.

International cooperation and input are highly valued by the speakers. They appreciate the contribution and input from an international audience, highlighting the importance of collaboration and partnerships in addressing global challenges. This signifies the coalition’s commitment to engaging with a broad range of stakeholders and leveraging diverse perspectives to drive sustainable digitisation forward.

Overall, the Dutch National Coalition for Sustainable Digitisation emphasises the potential of AI to support sustainable development, while also emphasising the importance of ethics, public participation, and sustainability in its implementation. They recognise that responsible and ethical AI development is crucial for achieving the goals of the energy transition and ensuring a sustainable future.

Session transcript

Jessie:
Thank you. I think we’re ready to start. And we would start our session with a video, I believe. That’s always technical issues. Hi, my name is Jessie, and I am your virtual presenter. I would like to welcome you all at this open forum, Accelerating an Inclusive Energy Transition. Thank you all for coming. Luckily, I will not be your real host today. While artificial intelligence offers many opportunities for innovation, nothing beats a real-life moderator. Give a warm welcome to Alisa Hever, MAG member and senior policy officer at the Dutch Ministry of Economic Affairs and Climate Policy.

Alisa Heaver:
So good afternoon, everyone. Welcome to our session. I’ve already been introduced by AI, but I’m really pleased that we don’t have an AI moderator, because probably everybody’s name would be mispronounced then, because my name is actually Alisa Heaver, not Hever. We will be talking about sustainability this afternoon, and AI and the energy transition. Today, this morning in the Dutch newspaper, we could read that AI might require as much energy as the Dutch economy requires now, if generative AI will continue to grow as fast as it’s growing now. I believe it was 2027, but I’m not the expert on this topic. We thankfully do have a few experts here in the room and online as well. First of all, we have Hannah Boute. She is from the ECP, which is a platform for coalitions in the Netherlands. She will do an explanation on the guidance ethics approach and chances of digitalization in the energy transition. She will also do an explanation about Mentimeter, because we want to make this session very interactive, as I already said. Thereafter, we will have a brief presentation from Neil Yorke-Smith. He is from the Technical University in Delft, and he’s part of the Dutch Coalition on AI. Thereafter, we will have Tim Vermeulen. He is a strategy board member at Alliander, that is a Dutch energy network operator. He’s part of the Dutch National Coalition for Sustainable Digitalization. Last but definitely not least, we have Chantarapeach Ut. She is from the Organizing Committee of the Youth Internet Governance Forum in Cambodia. We’ve done the introductions, and we’ll go over to Hannah.

Hannah Boute:
Thank you very much, Alisa, and thank you all here today in Kyoto and online for attending this session. My name is Hannah. I’m working for ECP, Platform for the Information Society, where we organize public-private cooperation. I’m involved in two of those public-private cooperation projects. The first is the Dutch National Coalition for Sustainable Digitization, where I’m the secretary, and I’m also involved in the guidance ethics approach, where I’m moderator. Both projects will contribute to this session today on accelerating an inclusive energy transition. First, I will briefly tell you a bit more about the Coalition for Sustainable Digitization, and then I’ll tell you how we will explore the ethical dimensions of the energy transition with help of the guidance ethics approach. The Dutch National Coalition for Sustainable Digitization is a coalition where we work with stakeholders from the quadruple helix, meaning the government, business industries and SMEs, civil society, and universities on the greening of IT, but we also look at greening by IT. In this session, we will actually dive into both, because we will look at how we can accelerate the energy transition with artificial intelligence, but like Alisa also just mentioned, it’s also very important to look at how we can incorporate sustainability into the design of AI. As with every technology, ethics are in play, so it’s important to explore the ethical dimensions. To do that, we need your participation today. I’m going to explain how we’re going to do that. Throughout the presentations, we ask you to listen carefully and identify implicit and also explicit values that are mentioned. I will help you a little bit with how you can identify implicit values with help of the guidance ethics approach, which I will explain briefly in a minute. After my contribution, Doreen will help you log into Mentimeter, so keep your phones ready for that.
Then the values we identify together will be input for the panel discussion, so we can see how we can sustain them in the European, but also in the Asian perspective on the energy transition. Guidance ethics approach, what’s that? This is an ethical method that actually looks at the effects of a technology in a very specific context. It’s not high-level and abstract, but it looks at that specific context. Those effects can be positive, they can be negative, and they are identified together with the people and organizations involved with the application of a technology in that specific context. Behind effects, you can find values, and I’m going to show you an example in a second to make that more concrete. If we’ve identified values that are relevant in a specific context, we can actually use them as a starting point to design the technology, to implement the technology in the specific environment we’re talking about, but also in how we use the technology in that environment as a human being. I think I’m missing some slides, actually. That’s okay. I can explain it. I just… Let me see. Yeah. Okay. Can you get my slides? All right. I’m just going to talk you through it. I think that’s going to be fine. The guidance ethics approach was developed from ECP with the Technical University of Twente and several stakeholders from government, civil society, and business industry. The approach always starts with a technology in a specific context. Today, the technology is artificial intelligence, in the context of the European energy transition and the context of the Asian energy transition. Certain effects can be found in that and you will hear those effects mentioned by the speakers telling you about the transitions today. We can use those values to actually design the technology in such a way that we sustain those values. Oh, there we are.
The first stage, technology in context, we ask you to participate to identify the values and then our panel will use the values to look at how we sustain them in the technology, the environment, and as how we use the technology. I’m going to give you a quick example. For instance, hypothetically speaking, if we apply AI on the energy grids to match supply and demand, and there is not enough supply to fulfill all demand, choices have to be made. So, AI doesn’t prioritize in the sense of what is needed, for example, for health. So, a negative effect might be that a crucial asset, for example, a hospital, is left without power. So, behind that effect, we’re actually talking about the value of social responsibility. So, then the question in stage three of this method is how we can design a technology, the implementation of the technology, and the use of it to sustain this value. I hope that’s clear for everyone now. So, short recap, grab your phone, log into Mentimeter, listen carefully to the upcoming speakers if you hear values and enter them into Mentimeter. So, at the end of the contributions of the other speakers, we will have a look at those values you entered, and then we’ll ask them how they are planning or already sustaining the values mentioned by you. Thank you so much.
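The prioritisation effect Hannah describes can be sketched as a toy allocation: when supply falls short of demand, a naive allocator sheds load in arrival order, while a priority-aware one serves critical assets such as a hospital first. All consumer names, loads, and priorities below are hypothetical.

```python
# Toy allocation illustrating the hospital example: with limited
# supply, a naive allocator serves consumers in arrival order and may
# leave a critical asset unpowered; a priority-aware allocator serves
# the most critical loads first. All figures are hypothetical.

def allocate(consumers, supply, prioritised=False):
    order = sorted(consumers, key=lambda c: -c["priority"]) if prioritised else consumers
    served, remaining = [], supply
    for c in order:
        if c["demand"] <= remaining:
            served.append(c["name"])
            remaining -= c["demand"]
    return served

consumers = [
    {"name": "office_park", "demand": 60, "priority": 1},
    {"name": "households",  "demand": 30, "priority": 2},
    {"name": "hospital",    "demand": 40, "priority": 3},
]

naive = allocate(consumers, supply=100)              # arrival order
aware = allocate(consumers, supply=100, prioritised=True)
```

The difference between the two runs is exactly the value of social responsibility made operational: the priority field encodes a human judgement that the bare supply-demand matching omits.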

Alisa Heaver:
If you can’t scan the QR code, then you can also go to menti.com and use the code, oh, use the code, the code, go back to the slide, yes. You can use the code 45732451.

Hannah Boute:
So, the question for you is not to think of values already, but to listen if you hear values mentioned by our speakers. As I explained, sometimes you can find values behind certain effects. Like I just gave the example, an effect might be that an AI is not able to prioritize an asset in the way we as a human would do that. So, that means that the value of social responsibility can be found behind that effect. But don’t worry if you can’t identify that, just concrete values are great as well. Does it answer your question? Cool.

Alisa Heaver:
Okay. Yes? Okay. So, the code, so you can go to menti.com and the code is 45732451. You’re welcome. I hope we’re ready to go to our first presentation or, well, yeah, real presentation from Neil York. Is he with us?

Neil Yorke-Smith:
Yeah. Can you guys hear me?

Alisa Heaver:
Yes. Perfect. Well, no, I don’t see you. I do see your slides, though, but it would be lovely to see your face as well. I can hear you and see you and I’m broadcasting my video. Ah, yes. Now we see you.

Neil Yorke-Smith:
Oh, perfect. All right. Let me see if I can share this.

Alisa Heaver:
But now we don’t, okay, oh, yeah, now we see both your slides and we will see you when you’re talking. Okay. Good luck. Okay. Talk talk. Let’s see if this works.

Neil Yorke-Smith:
Well, hello. Good afternoon, everybody. Or good morning from the Netherlands. It’s nice to be here and to talk with you. So, I’m going to talk about the inclusive energy transition and the role of AI. So, I’ll give a few thoughts and a few examples about it. And as was said, I’m from the Delft University of Technology, where I sit in the computer science department. Let’s see. Great. So, I’d like to point out that the energy transition is seen, at least in the Netherlands, as the defining challenge of this generation. There are perhaps many reasons for this, not least that the Netherlands is a low-lying country and feels climate change quite profoundly. And AI is increasingly a part of the energy system. It’s already there. We’re already using AI in different ways. And perhaps Tim, when he gives his contribution next, he’ll also say more things about this. So, I thought it useful to say something about what is AI. AI is perhaps such a broad term and people have different views on it. So, here’s my one slide on, at least from a computer science perspective, as someone who works in AI, what are we talking about when we say AI and then its use in the energy system? So, we have two axes here. I don’t know if you can see my cursor. The top is thinking and the bottom is acting. And the left is humanly and the right is rationally. So, we could do it something like this. You could say, well, AI is, we want to think like humans. We want to have something that’s conscious, you might say. That’s one view of AI, thinking like humans, becoming human-like in that way. Another view is, well, let’s not maybe think like humans, but we want to act like humans. So, to have human-level capability in many areas. So, some might say, strong AI. Other people will say, well, AI, it’s about being aware of others, the kind of theory of mind. This is more philosophical, perhaps more from a philosophical perspective.
And the fourth view that people have is, well, it’s about acting rationally, not acting like a human necessarily, but acting in a way that kind of makes sense. So, you might say this is more like the weak AI view. So, AI kind of is a tool with excellent abilities in certain domains. That’s what the view I’ll be talking about here. So, it’s not about becoming human-like or even acting necessarily like a human, but acting rationally in a given situation. So, this is called the intelligent agent view of AI. Here’s a picture, actually some work from TU Delft, of the control room of the future. And you see there’s no, you know, acting like humans here, but it’s AI helping here with grid management. And again, I think Tim might say some more things about this. So, technology and AI, where they’re already being used, they’re already being used as part of the energy system, today I’d like to say they’re only part of the solution. AI is not the whole solution to the energy transition. And one reason, at least, is that the societal impacts of an AI energy system will be crucial, but we don’t know them yet. We don’t know all of the impacts of AI and energy on society. And here’s a picture from Unsplash, some residents, perhaps, of Amsterdam. And the reason is that the ethical, the legal, the social, the economic aspects, they need to be studied along with the technological aspects. It’s not just, okay, we can put this machine learning into this system here. Well, should we do it? If we did it, how does it affect the regulations? Should the regulations change? How does it affect people? How do they feel about it? What’s the broad implications, not just the purely technical solution? And here’s another nice picture of Amsterdam from Unsplash. 
So, what we’d want to say is that questions like, well, values, we just heard about the ethical guidance approach, things like values, trust, justice, fairness, these questions, these considerations are as important as efficiency, as technical sustainability, and so on. These non-technical factors. And at least in the view in the Netherlands, we’re still growing our emphasis, our research, on these kinds of questions as well. But it is there, it is coming. And so, a good example is a data center here. Okay, we can build a large data center. Well, how does it affect the people around it? How does that affect our societal priorities? What about the poor people in the city who don’t have enough energy? So on and so forth. So, we do see that AI, which combines data and algorithms, can bring benefits. For example, in forecasting, demand and supply loads, and in system design, efficiency of the operation of the system, real-time balancing, demand response, when we have lots of sustainable generation, flexible pricing, we can do markets in new ways. So, I do see benefits or potential benefits as we transition away from fossil-based fuels. But at the same time, these non-technical, ethical, legal, social questions, I think, are worthy of our consideration. And here’s a quote from the head of energy of the World Economic Forum. And I think that the key aspect here is the principles that help us think about how we govern and design and use responsibly AI in the energy system. And let me mention three principles, perhaps. The first is the trustworthiness of the technology. So, trustworthiness, of course, on many levels. But some aspect is this notion of this meaningful control, the citizens have some say in how their data is used, how the system will be designed, the notion of some kind of collaboration between the human and the AI.
So, the whole notion is around trustworthiness. It’s a large area, of course, in its own. The second value, then, this notion of justice and justice in the energy system in particular. So, you know, do I benefit more from AI if I have an electric vehicle? But of course, not everybody can afford an electric vehicle. So questions around justice, energy justice in society, even in an affluent country like the Netherlands. And the third principle, perhaps this notion of fairness. So, okay, there’s a market, there’s new types of markets. There’s the sellers, there’s buyers, there’s prosumers of energy. How can we design the market so that it’s efficient, that it’s effective, but also that it takes on board notions of fairness? So these are three principles perhaps to think about as we increasingly use AI in the energy system. Now, I’ve mentioned, of course, the Dutch context, and here’s a map from Wikipedia, just and only to show you where the Netherlands is, and to say it’s this little area here. What else is in the European context? And I think here, Tim will help us. And of course, what’s more broadly in a non-European context? So the question then is, what can the Netherlands learn from other countries as we talk about AI, as we talk about an inclusive energy transition? And perhaps are there things from the Netherlands which will also be useful in other contexts? It’s an open question perhaps for our discussion later. Perhaps one example of this, I think Nigeria, and one area where perhaps the Netherlands can learn is this use of sharing resources together. I just leave that on the floor perhaps for later discussion. So we’ve been talking about our inclusive energy transition, the potential of AI, how AI is already being used, and some of the questions perhaps around values and the non-technology side of it. I’m curious to hear what we will discuss together later. Thank you very much.

Alisa Heaver:
Thank you very much, Neil, for this informative presentation. I just wanna go to the audience. Is there any quick question that someone wants to ask as a follow-up on this presentation? If not, then we will directly go to Tim Vermeulen. No? Okay, then we’re set to go to Tim. The floor is yours. Tim, we cannot hear you as of now.

Tim Vermeulen:
And now you can?

Alisa Heaver:
Yes, now we can. Yeah, and we can see you.

Tim Vermeulen:
Perfect, thanks. That’s a big bonus. So hi, everyone. My name is Tim Vermeulen. I’ve already been introduced, head of digital strategy and architecture for a grid operator in the Netherlands. And I will try to give you an EU perspective on the energy transition. It’s a very broad topic, and I’ll try to dive into a few specifics that will help us in a few cases here and dive a bit more into how technology is actually already impacting the landscape so far from a European perspective. I just have six slides, but one of the first slides here is that the energy mix in the EU, in Europe, is changing. So we see more renewables coming onto the market. I’m saying nothing new here, but the energy mix is changing, and that is changing relatively rapidly from a system perspective. So we’ve done the same thing over decades before where we have central generation of electricity and local use of generation. And basically we’re mixing that whole thing up. So from a one-way street, we’re changing the entire energy landscape into a two-way street where everyone and every consumer can also be a prosumer. So you can produce energy and you can use energy. And that’s quite impactful and also leads to whole new opportunities for jobs, of course. So everyone has an easier way to contribute to the energy transition, which is lovely. And also the next speaker is gonna talk about how people can more easily act in the energy transition from this perspective. We see whole new business models popping up. So a whole new set of possibilities showing up, but also the question of how do these new markets actually become inclusive and how is technology playing a vastly different role in making sure that we can manage this different energy mix and landscape. And that’s quite relevant because our landscape in Europe is hyper-connected. So if you see, these are the electricity lines and you can see the synchronization zones in parts of Europe in the different colors.
But we have a super and hyper-connected grid on an international scale from a European perspective, but also a lot of impact on a local scale. So the local scale is connected as well. And we’re now focusing a bit on electricity, but there are a lot of energy carriers out there that play a role in the mix and in making sure we can meet supply and demand in Europe. But I’ll give some examples on how technology, based on this changing energy landscape, has to play a role. So with increased use of electricity and people and different energy carriers, we need technology to help us plan how we’re gonna change the grid. And this is a picture of part of the Netherlands. And usually when we expand our grid and make sure that we build new substations, so where the high voltage lines are converted into mid voltage lines, so they go into neighborhoods and so on, in the ground, in parts of the Netherlands. We used to do this all by hand, with people making a lot of asset management plans: this location needs to be here so we can meet the supply and demand here. But nowadays we’re asking algorithms to help us with this process. We’re asking algorithms, okay, so if this is changing here, we get a heat network in this city, this is changing in the energy mix for this part of the city, where should the new substation be? And this is very interesting because we’re asking AI, we’re asking algorithms where we should build our substations. And now the fundamental question we already have in parts of Europe is, okay, so how is bias introduced into these decisions? And to give you a very practical example, if we need to decide where we should lay thicker cables to provide more electricity, if you ask an AI based on all the data from the Netherlands and different neighborhoods, they will say, please do that in the richest areas of the Netherlands because the chance of them buying solar panels or the chance of them buying electric cars is just the highest.
So we should lay thicker cables in the richer neighborhoods so we can provide them with electricity and access to the grid more easily so they don’t run into capacity issues. And that’s fundamentally interesting from a technology perspective that an AI says that, but now, when push comes to shove: if we’re doing it this way, we’re only increasing capacity in the richer neighborhoods and definitely not in the neighborhoods with less money to spend on EVs and solar panels, and you get less access to the grid for those other areas. So, and that is what AI is introducing and actually what we’re already running into. So trying to remove that bias, trying to look at fairness from a grid point of view has become very, very important. And this is just one example where we’re trying to see where should the next electricity stations be, but there’s gonna be AI all across the board, from data collection to forecasting to active congestion management. So one of the things in Europe which is really coming up now is the fact that we’ve laid cables into the ground or on the poles above the ground for decades. And of course, as I mentioned before, we haven’t predicted the use of those cables in a two-way street, but only in a one-way, generation to the consumers, instead of consumers all using their solar panels, which are really in a huge uptake in the EU, and pushing energy back. And that means that the cables are used heavily and we need to manage them so we don’t go over the capacity and have faulted cables and disruptions. So we need AI across the board to address this situation. But that, again, with the example from before, means there could be bias in all of the steps of how we manage electricity, the energy mix in the Netherlands. So that requires a lot of attention and a lot of collaboration.
So what you see in the European energy landscape is that over the past decades, every energy company and grid company was very much focused on its own operations, but nowadays they have to open up what they’re doing to learn from each other, to share how they can actively use algorithms, but also actively fight this bias. And a great example, you can see the products here already, and that’s my last slide, is that people are using open source more and more, even for core grid capabilities, which was unthinkable 20 years ago, because that was all very proprietary: you have to protect this. But if we are all on this path of managing this complex world, this complex energy mix, we need all different kinds of tools. These are all open-source products with grid forecasting capabilities and so on, under LF Energy, the Linux Foundation’s energy initiative, but there are more open-source foundations out there. So you see this opening up, and that is something we couldn’t have predicted 20 years ago, and it will definitely further technology use and also let us learn from each other on how we battle this bias in algorithms. But we need to use them anyway to manage the grid effectively. That’s my talk so far.
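Tim’s thicker-cables example can be sketched as a toy allocation model. This is a hypothetical illustration only: the neighborhood names, income indices, demand numbers, budget, and the minimum-capacity floor are all invented for this sketch, not real grid data or any operator’s actual method.

```python
# Toy sketch of the bias Tim describes: allocating grid reinforcement
# purely by predicted demand growth (which correlates with income, since
# richer areas buy EVs and solar panels first) vs. the same greedy rule
# with a simple fairness floor. All data is invented.

neighborhoods = [
    # (name, income_index, predicted_demand_growth_mw)
    ("Rich-A", 1.8, 9.0),
    ("Rich-B", 1.6, 8.0),
    ("Mid-C", 1.0, 5.0),
    ("Low-D", 0.6, 3.0),
    ("Low-E", 0.5, 2.0),
]

BUDGET_MW = 12.0  # total extra capacity we can build

def naive_allocation(areas, budget):
    """Greedy: spend where predicted growth is highest until budget runs out."""
    alloc = {name: 0.0 for name, _, _ in areas}
    for name, _, growth in sorted(areas, key=lambda a: -a[2]):
        give = min(growth, budget)
        alloc[name] = give
        budget -= give
        if budget <= 0:
            break
    return alloc

def fair_allocation(areas, budget, floor=1.0):
    """Same greedy rule, but every area first receives a minimum capacity floor."""
    alloc = {name: floor for name, _, _ in areas}
    budget -= floor * len(areas)
    for name, _, growth in sorted(areas, key=lambda a: -a[2]):
        give = min(max(growth - floor, 0.0), max(budget, 0.0))
        alloc[name] += give
        budget -= give
    return alloc

naive = naive_allocation(neighborhoods, BUDGET_MW)
fair = fair_allocation(neighborhoods, BUDGET_MW)
```

Both runs spend the same budget, but the naive ranking gives the low-income areas nothing at all, which is exactly the feedback loop the panel warns about. A minimum-capacity floor is only one of many possible fairness constraints; real grid planners would weigh far more factors.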

Alisa Heaver:
Thank you. Thank you, Tim. Also perfectly on time. So thank you for being concise on that. Is there anyone who has a particular question? You can stand up to the mic and ask your question, and please introduce yourself.

Audience:
Thank you. My name is Mojtaba Rezakar. Actually, yeah, it works, yeah. I am an MP from Iran’s parliament. It’s an interesting subject that you are discussing here, but you can look at it from different views. First of all, are we going to consider fairness of energy distribution, or are we going to consider fairness in all subjects? Are we going to talk about just one subject, or about all of them? Let me ask it in another way: are we going to use the same technology all over the world or not? You are talking about the fairness of energy distribution. So if different parts of the world are using different technologies, how do you come up with a solution for this subject? Let’s say we are using a car. Some places are using gas. Some places are using electricity. Even when you use gas, there are different technologies that use different amounts of fuel for the car. So if you are using different technologies, and that probably has other effects, are we going to consider those items or not? Let’s say, are we going to consider sameness of technology or not? Are we going to consider the impact of other things? Let’s say the environment: when you use different technologies, perhaps that impacts the environment as well. Thank you.

Alisa Heaver:
Thank you. I’ve heard a lot of questions. Tim, did you catch them sufficiently?

Tim Vermeulen:
Mostly. Okay. Mostly, and maybe we can use some of them for the further discussion at the end. Yes, yes. But to give a short response: fairness on just one side of distribution, of course, makes no sense. So it’s fairness across the board. But also, you see everyone using different technologies, not only in different parts of the world, but even within countries, even within companies at some points we use different technologies. So the question is: can we look at everything from a modular point of view, where every part tries to assess fairness in its own way, but we also look at the system as a whole? And this is the challenge of this integral approach. But the point still remains that we need to look at fairness across the board. So not only from an energy distribution perspective; if we want to cut CO2 emissions in the Netherlands and in Europe and globally, that also needs to work together. And every country has different challenges and a different energy mix. In the Netherlands we use a lot of natural gas, because we have the natural gas field in our own country, which we could use. So that gives a whole different perspective on how and what changes we make in terms of CO2 and emissions than the country next to us. But as long as we’re as transparent as possible with our technology and make it as modular as possible so that we can interact with the rest, well, that’s the name of the game. There’s no definite solution, so that’s why we need to collaborate and work in an open way together to get there step by step.

Alisa Heaver:
Thank you, Tim. Well, then I’m gonna go next to Peach. Well, up to you, the floor is yours. Do you want the mic?

Chantarapeach Ut:
Hello, everyone on site and online. I’m sorry, I’m feeling a bit under the weather, so if you hear a little bit of a voice problem, I’m very sorry in advance. My name is Chantarapeach Ut, one of the organizing committee of the Cambodia Youth Internet Governance Forum, which was hosted last month. I’m also currently pursuing a green job as a space and sustainable operations officer at Impact Hub Phnom Penh, and I hope to keep pursuing green jobs in the future as well. Today I’ll be talking about how we can unlock ASEAN’s green energy future through youth. So why is green energy important? The IPCC report that I went through highlights that global CO2 emissions are increasing to unprecedented levels, which means the energy transition needs to be accelerated. Nine out of ten ASEAN countries have currently set net-zero targets, but many of them are still among the most vulnerable to climate change, and 650 million people reside in these nations. So the energy transition is very much needed to supply the steep rise in energy demand, and we face a lot of challenges in pursuing the green energy transition in the ASEAN context. Because, as you know, each country has different infrastructure, and each nation has its own respective energy structure and system, and its own resources to supply that energy. So transitioning might require a huge amount of financing, especially for developing countries. They might need a lot of funds, or support in technology transfer or knowledge improvement, from neighboring countries or from developed countries inside or outside the region. And also, while change might be needed, change will always affect things. It might affect existing jobs and existing economic opportunities, which need time to adapt.
Which is why it might take a lot of time, but I’ll raise a few cases of how youths can contribute to this development. I will focus on youths. So first of all, green energy. As you see on the slide, these are some technologies that are currently being invented by youths. They include green energy engineering, smart agriculture, renewable energy optimization, air quality monitoring, green buildings, climate modeling, and eco-friendly transportation. These are all technologies currently at a very early stage of development. So yes, these are the youth-led technologies that need support to be turned into reality, so that their innovation and creativeness can blossom. When you support them, you create entrepreneurship mindsets in our society and push our youth forward even further. So investment in youth and their potential is very crucial for our society at this point. Moreover, youths can also contribute by pursuing green jobs. They can advocate for green taxes, or start recycling their e-waste or the previous generation’s waste. And some of them might become green AI researchers, sustainability data analysts, renewable energy engineers, or clean-tech researchers. These are all types of green jobs that youths can pursue in the future, and they need everyone’s support in raising awareness of these green jobs. When you increase the value of green jobs, more youths will start to realize that the jobs they are pursuing are actually making an impact on their society, and that impact can be either positive or negative. Some people might just work to make ends meet, but some people are working to actually make an impact. So I want to start raising awareness so that youths realize they can also make an impact while earning money to sustain their lives. And for youths to pursue green jobs, they also need a lot of exposure.
In my own context in Cambodia, I majored in international relations and economic science. I was not aware of green jobs at all until I started working at Impact Hub, where I applied for the role without knowing it was a green job. That is how clueless I was as a youth. So this is my youth perspective, and this is why I want us all here to start emphasizing the importance of green jobs. So now let’s move to the next one, which is youth green internet initiatives. Aside from green jobs, we can also start creating platforms or events that raise this kind of awareness. For instance, I am part of the Cambodia Youth Internet Governance Forum. Local internet governance forums are hosted around the world, and this one was held for the first time in Cambodia, which is very surprising when other countries have had them for so long. This shows how slow we are in terms of technology and AI transition. So yes, we need a lot more events like this in the region, especially in Cambodia and in other states, and more ways for youths to actually voice their opinions and their concerns. They can also go to the Local Conference of Youth, which is held in each nation under YOUNGO, the official children and youth constituency of the United Nations Framework Convention on Climate Change. When youths in one state come up with a statement, those statements are forwarded to the regional Conference of Youth, and once they are concluded there, they also become part of the Conference of the Parties, COP28, which is going to be held from 30 November until 12 December this year. So this is a platform on which youths can actually learn and become aware of internet security while also raising issues of sustainability and the environment.
And in this case, the LCOY, the Local Conference of Youth, is also being held for the first time in Cambodia this year; for us, everything is a first. So yes, I will try my best to raise awareness of this, and I’ll try to raise awareness about AI and cybersecurity, since youths need a lot of knowledge about it, as they are the ones currently using the internet the most. And I believe building a line of defense on cybersecurity is one of the most important things for them. Making them part of this conference will give them the opportunity to be part of the decision-making process as well. It’s a space for them to voice their perspectives, concerns, and ideas related to digital policy and online rights, while also advocating on climate change. So yes, I’ll share what our neighboring country has done: Vietnam already held its LCOY this year, and this is their statement from the event. They call for Vietnamese youth to send relevant parties to the COP28 conference. I actually applied for this as well, and I hope I get selected to be part of it, to share about AI too. And from the Cambodia YIGF, we also got youth testimonials, in which I learned that a local IGF is very important, as it equips us with knowledge on internet governance and its impact on the world we are living in. So I sincerely hope that I have raised important views through my presentation, and I hope you will be more involved and interested. I hope that the sustainability field will be more acknowledged and that we can fast-track an inclusive energy transition in the future. Yes, that’s all from me.

Alisa Heaver:
Thank you. Is there anyone who has a particular question for Peach? No? Okay. I just wanted to say, besides all the initiatives that you have mentioned, even though those are all youth initiatives, there is a very good international initiative as well that is, well, for youth and a bit older people. And that’s called the Coalition for Digital Environmental Sustainability. And that’s… Oh, I see someone standing up for a question.

Audience:
I actually don’t have a question for Peach specifically; it’s probably a question for every speaker. My name is Wan Kwan. I am from the software development industry, and I believe in clean code. I believe that clean code will consume less energy than bad code. And I believe that when AI comes, it will consume a lot of energy. So, do you think about that, and talk about it in the policy-making process as well? Because AI can help us in other areas, but it also poses challenges. I hope you get my question.

Alisa Heaver:
Thank you. I kind of almost feel that this is a value in coding, but I might be wrong on this. But if you would allow me to only ask Hannah at the moment, so we can then move on to the second part of Hannah’s presentation. Thank you so much.

Hannah Boute:
That’s definitely a value. We already discussed today incorporating sustainability in the design of artificial intelligence. And within the Coalition for Sustainable Digitization, we also have a working group that’s looking at principles for green software. But I’m pretty sure that our expert on artificial intelligence, Neil, will come back to that in a second during our panel discussion. So I suggest we move on to that one.

Alisa Heaver:
Yeah, that’s why I also gave the floor to you, because the next part is also up to you, Hannah.

Hannah Boute:
All right. So, I think we gathered some values. There they are. And so, I would like to ask Tim and Peach and Neil to have a look at those values. I think we leave them on the screen, right? Yeah. All right. So, then I’m going to turn to Tim first. So, I see some values, sustainability, fairness, energy, justice, trustworthiness. From the European perspective, Tim, which values do you consider most relevant? And what is currently being done or can be done to sustain these values in the energy transition from the European perspective?

Tim Vermeulen:
Yeah, I think, and I will address the question of clean code we can get to later. But if you look at the center, I think we see a few of them that are definitely very prevalent from a European perspective already, or at least from my perspective, for sustainability and fairness. And maybe integrity is one of the values which I’d also want to see. Because also, access to energy is a right. It’s something we need to protect. It’s something we need to make sure that everyone has that access. So, integrity as a system as a whole. I don’t care if you’re a grid operator or energy producer or a prosumer in the world. Integrity of the entire system is something we need to protect in order to foster sustainability and to protect the fairness of the system. So, that is something that I would want to add to the core of the values.

Hannah Boute:
All right, thank you so much. Peach, then I’d like to turn to you. So, looking at these values, which one do you consider most relevant from the ASEAN perspective? And how are you currently sustaining those values from your perspective?

Chantarapeach Ut:
For me, in the context of ASEAN’s green energy transition, I believe that harnessing renewable energy more efficiently and storing it more effectively are the values that ASEAN should focus on. Currently we are using fossil fuels, and most of the region is interrelated in terms of sharing that energy. And as everyone may know, there is currently a bit of turbulence with Ukraine and Russia, and that has also disrupted the energy sharing within our region. So I think ASEAN should focus more on renewable energy in order to supply and sustain its energy in the future as well. Yes, thank you.

Alisa Heaver:
Thank you so much. Then, a final panel question for Neil. And Neil, after answering this question, I’d also like to ask you to go into the question about energy-efficient coding within AI. But first the panel question. So, we heard the replies of Tim and Peach with regard to relevant values and how to sustain them from both their perspectives. From the perspective of how AI can help realize an inclusive energy transition, how do you think both perspectives could strengthen each other to sustain the named values?

Neil Yorke-Smith:
Yeah, I think it’s an important question. So, I guess one route for technology is that the values are more those of capitalism, or the marketplace: we want to deliver some value to some people, so we develop technology to do this, then we want to sell it, and so on. A second way of designing technology is what’s called value-sensitive design, where we think about the values of the potential customers, but more broadly of society, and how we incorporate those values in the design process. And I think this can be one way to sustain the values that certain stakeholders, or society, think are important. Linked with this, I think there’s also the notion that there should be some sensitivity to how values can change, particularly in longer-term decisions. So, for example, Tim mentioned infrastructure decisions. If we’re making decisions now which will be with us for 20, 30, 40 years, then it’s not only what today’s values are, but also how those values might evolve in the future. This is a hard question, and it’s not my speciality, but I think we should recognize that some of the decisions we make now have future consequences, and at least be aware of that. Then turning to the second question, particularly the value of efficiency. If I understood the question correctly: when we have code, we have algorithms, we have AI systems, how can more efficient systems, better-designed code, cleaner code, contribute to questions about sustainability? And I agree that is a value. In fact, just yesterday I was talking with some people about doing a tech startup, bringing a new AI technology to market. What are the values? Is it indeed time to market, disruption, potential profit, and so on?
Or is one of the values that maybe we go slower when we’re developing our systems and implementing our code, because we take on this non-functional requirement of the efficiency of, let’s say, the AI algorithm? And to me this is interesting, because I don’t know to what extent people think about this. You download an app onto your phone, and the app is 500 megabytes. But actually, maybe it could be only 100 megabytes if the developers took more time and focused on the size and the efficiency of things. So, I think it’s an important question to raise.
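The clean-code question from the audience can be made concrete with a small sketch, using operation counts as a crude stand-in for energy use. Both functions below are hypothetical illustrations for the same task (finding duplicates in a list), not a real measurement methodology.

```python
# Toy sketch: two implementations of the same task, instrumented with a
# basic-operation counter as a rough proxy for compute (and hence energy).

def duplicates_quadratic(items):
    """Naive O(n^2): compares every pair of elements."""
    ops = 0
    dups = set()
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            ops += 1
            if items[i] == items[j]:
                dups.add(items[i])
    return dups, ops

def duplicates_linear(items):
    """Cleaner O(n): a single pass using a set for membership tests."""
    ops = 0
    seen, dups = set(), set()
    for x in items:
        ops += 1
        if x in seen:
            dups.add(x)
        seen.add(x)
    return dups, ops

data = list(range(1000)) + [1, 2, 3]
d1, ops_quadratic = duplicates_quadratic(data)
d2, ops_linear = duplicates_linear(data)
```

Both versions find the same duplicates, but the single-pass version does orders of magnitude fewer comparisons on the same input. Operation counts are only a proxy; real energy use also depends on memory traffic, hardware, and runtime, which is exactly why the panel frames efficiency as a non-functional requirement worth budgeting time for.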

Tim Vermeulen:
And maybe to add: I also think the work Peach is doing, the awareness part of looking at your impact on a societal level, is definitely a part still not always taken into account in developing algorithms across the world. I can see that already in my company: if we build something, we want to build it to have functional impact, and then we also have to ask ourselves, is the code clean enough? Does it run efficiently? And we should look at that from a broad perspective, whatever you’re doing, so that every job can be a clean job, to put it in the phrase you used earlier. So there’s work to do on awareness, and I saw awareness on the screen as well; it’s still something we need to work on in the broader sense of the word. And not only for people working in energy, but for anyone in any sector who’s building any kind of application, because you’re impacting the energy transition in one way or another anyway. So awareness is probably a big thing there.

Alisa Heaver:
Yes, it’s done. Yes, thank you, Tim and Neil, for your contributions in answering those questions. I am really pleased with the number of people who have handed in a few words; I just want to recognize that first of all. Is there anyone else who has a particular question on the presentations given? No? Do you have anything more to add on the word cloud?

Hannah Boute:
I think it’s very valuable input, since we’re here today with an international audience, and I’m really pleased to see your input on values, on a very important topic of international importance. So I just want to emphasize that this kind of input is something we need, and international cooperation on these kinds of topics is very important. So, thank you so much for your input.

Alisa Heaver:
Yes, well, if there aren’t any other questions, then maybe let’s say it like this: if any of the speakers want to give closing remarks, I would like to give them one minute each for key takeaways.

Chantarapeach Ut:
Okay. So, one minute. Okay, I’ll try to make the most of it. I just hope that everyone here got to learn a lot from the session, and I hope that I have made an impact as a youth. I’m here on behalf of my team, and I sincerely hope to see a more inclusive energy transition in the future. To get there, not only youth will be important stakeholders, but also the adults and the people higher up, who need to help by giving us direction, shaping a path for us to follow, and supporting us in the future. And I sincerely hope that everyone will give us more opportunities to take part in this kind of event, to learn and to improve our knowledge as well. Thank you.

Alisa Heaver:
Thank you, Peach. Tim, your final remarks.

Tim Vermeulen:
Yeah, so as I mentioned in my story, I see a tremendous force toward opening up everything we’re doing, from a technology perspective but also from a complexity perspective. And that’s not a bad thing. It means that we can all more easily contribute to what we’re doing here, whether it’s open source or sharing values in different areas to foster the energy transition. So seeing all of that open up, and seeing where technology is pushing us, I think is a big opportunity for everyone to make sure that we’re on the right value track and do value-sensitive design, as Neil said. I’m hopeful for a future where we are managing this landscape, not only in Europe but in the entire world, and sharing that knowledge. So that’s my last two cents.

Alisa Heaver:
Thanks, Tim. Neil, last but not least.

Neil Yorke-Smith:
Yes, thank you. Yeah, I think a value that’s also in the discussion is the notion of accountability. So accountability of AI systems, accountability of those who develop them, accountability more generally in society towards the energy transition. And perhaps to add also, as Tim said, there is a global perspective on this. And so are also European countries accountable to other parts of the world? Because we’re interdependent. And I hope we have things to learn from each other, but also to strive together towards the energy transition.

Alisa Heaver:
Thank you, Neil. I would then like to thank all the speakers, and the tech team here, for ensuring that this session went really, really smoothly. And I particularly want to thank the speakers, some of whom woke up really early in the morning, because they are in quite a different time zone. Yesterday in the main hall, I asked the panel there on the GDC if they could ensure that there would be a bit more about sustainability in the GDC, because very, very little has been mentioned in the policy brief of the Secretary General, no, sorry, from the tech envoy. And that main hall is where the Kyoto Protocol was signed, or where the final negotiations took place. We’re in this incredible building here, and we should think about the future in that sense. And I think it’s wonderful that we’re having more and more discussions about sustainability and digitalization and making that combination. I think we had a great session here on accelerating an inclusive energy transition. So with these final notes, I want to thank you all for attending this session. And please feel free to chat around and exchange information, because I think that’s where the most interesting discussions come from. Thanks. Thank you. Thank you.

Alisa Heaver

Speech speed

142 words per minute

Speech length

1249 words

Speech time

526 secs

Audience

Speech speed

151 words per minute

Speech length

382 words

Speech time

152 secs

Chantarapeach Ut

Speech speed

170 words per minute

Speech length

1770 words

Speech time

624 secs

Hannah Boute

Speech speed

147 words per minute

Speech length

1423 words

Speech time

580 secs

Jessie

Speech speed

75 words per minute

Speech length

112 words

Speech time

90 secs

Neil Yorke-Smith

Speech speed

188 words per minute

Speech length

2078 words

Speech time

663 secs

Tim Vermeulen

Speech speed

195 words per minute

Speech length

2280 words

Speech time

702 secs

Achieving the SDGs through secure digital transformation | IGF 2023 Open Forum #92


Full session report

Yasmine Idrissi

The analysis reveals several key points about cybersecurity. Firstly, there is a pressing need to demystify the field and dispel the misunderstanding that it is solely a technical issue. It is important for actors, including development professionals and policymakers, to understand that cybersecurity is not just a technical problem, but also a consumer and policy issue. By broadening the perception of cybersecurity, it becomes more accessible and relatable to a wider audience.

The analysis also highlights the need for inclusion and diversity within the field of cybersecurity. Currently, cybersecurity is predominantly English-focused, which excludes other languages and dialects. To promote inclusivity, it is crucial to reflect and incorporate other languages and both national and local dialects in the field. This ensures that people from diverse backgrounds can fully engage with and contribute to cybersecurity.

Furthermore, the analysis suggests that non-traditional actors, such as political parties and civil society, should be included in shaping cybersecurity policies. On a national level, there can often be interagency friction between mandates, and involving these non-traditional actors can help to bridge the gap and ensure comprehensive and effective policies. By broadening the participation and perspectives in cybersecurity policy discussions, a more holistic and inclusive approach can be achieved.

The integration of cybersecurity into digital development projects is another crucial aspect. The approach to digital development and cybersecurity has often been kept separate within organizations, resulting in a siloed approach. By integrating cybersecurity into digital development projects, organizations can ensure that the security of digital systems and infrastructure is prioritised from the outset. This can be achieved by incorporating cybersecurity as a criterion in audits for development projects.

Donor-funded projects also have a role to play in integrating cybersecurity requirements. By building cybersecurity requirements into their projects, donors can contribute to the overall security and resilience of the projects they fund. This includes considering cybersecurity as an integral part of the project design and implementation process.

Additionally, the analysis suggests that cybersecurity can benefit from incorporating lessons from other fields, such as climate change. Both fields involve technical complexities that can be intimidating for policymakers and diplomats. By learning from the approaches and strategies used in climate change negotiations, cybersecurity can adopt a similar mindset of collaboration, knowledge sharing, and multidisciplinary thinking.

In conclusion, the analysis highlights the need to demystify cybersecurity, promote inclusion and diversity, involve non-traditional actors in shaping policies, integrate cybersecurity into digital development projects, and learn from other fields. These measures will help create a more comprehensive and effective approach to cybersecurity, ensuring safety, progress, and resilience in the digital world.

Allan S. Cabanlong

The ASEAN region is currently facing disruptions and ransomware issues as it strives to progress in digital development, highlighting the essential need for robust cybersecurity governance. The digital age has brought about unprecedented risks and vulnerabilities, necessitating ASEAN countries to address these growing threats effectively.

Interdisciplinary leadership plays a vital role in achieving a secure digital landscape and digital transformation. Based on the experiences of ASEAN, it is observed that leaders often lack interdisciplinary knowledge and expertise, which hinders effective digital governance. To govern digital development successfully, leaders should have a comprehensive understanding of all aspects of cybersecurity and its intersection with digital advancements.

Furthermore, the absence of proper cybersecurity governance exposes organizations and governments to significant risks, potentially resulting in catastrophic consequences. It is essential to establish clear policies, frameworks, and regulations to safeguard against cyber threats and protect sensitive information. Implementing robust cybersecurity governance measures enables organizations and governments to mitigate risks and ensure the security of their digital infrastructure.

In summary, the ASEAN region faces disruptions and ransomware challenges in its pursuit of digital development, highlighting the need for strong cybersecurity governance. Leadership with interdisciplinary knowledge is crucial for achieving a secure digital landscape and digital transformation. Neglecting cybersecurity governance can expose organizations and governments to severe consequences. Therefore, taking proactive measures to establish comprehensive cybersecurity governance is vital for the safety and stability of digital ecosystems.

Audience

The discussion highlighted the importance of budgeting and planning for the development of critical information infrastructure. A civil servant from the Sri Lankan government, involved in the formulation of the Sustainable Development Goals (SDGs), emphasised the significance of this aspect in achieving sustainable development. Sri Lanka has already taken steps in this direction by adopting a cybersecurity strategy and developing a cybersecurity policy.

The integration of policies and strategies for information infrastructure and cybersecurity into standard organisational structures and periodic development projects was proposed as a key step. This integration is crucial for the successful implementation of SDG 11 (Sustainable Cities and Communities) and SDG 17 (Partnerships for the Goals). By integrating these priorities into existing structures and projects, a more effective and streamlined approach can be taken to address information infrastructure and cybersecurity challenges. This will promote the development of sustainable cities and communities and foster partnerships for achieving the SDGs.

The supporting evidence for these proposals includes Sri Lanka’s existing adoption of a cybersecurity strategy and the development of a cybersecurity policy. These initiatives demonstrate the country’s commitment to addressing the challenges posed by information infrastructure and cybersecurity. Comprehensive policies and strategies help Sri Lanka tackle these issues in a more systematic and holistic manner.

Overall, the discussion maintained a neutral tone, with an emphasis on the practical importance of budgeting and planning. This suggests a pragmatic approach to addressing information infrastructure and cybersecurity challenges, highlighting the need for careful consideration and foresight in resource allocation and strategic decision-making.

In conclusion, the discussion highlights the crucial role of budgeting and planning in the development of critical information infrastructure. Sri Lanka’s efforts in adopting a cybersecurity strategy and policy serve as positive examples. To successfully implement the SDGs, it is essential to integrate policies and strategies relating to information infrastructure and cybersecurity into standard organisational structures and periodic development projects. By doing so, Sri Lanka aims to achieve sustainable cities and communities while fostering partnerships for the SDGs.

Moctar Yedali

The analysis highlights several important points regarding cybersecurity challenges in Africa and the need for greater attention and inclusive approaches. Firstly, while many African countries have digital transformation strategies, cybersecurity is not sufficiently integrated within them. This is a concerning issue as cybersecurity is crucial for protecting digital assets and ensuring the safety and integrity of digital infrastructure. The responsibility for addressing cybersecurity primarily falls upon ministers in charge of digital transformation and security/defense, with limited involvement from other stakeholders. This raises concerns about a lack of multi-stakeholder participation in cybersecurity discussions and decision-making processes.

In addition, many African countries lack efficient cybersecurity strategies. This poses a serious risk as cyber threats continue to evolve and become more sophisticated. Without effective strategies in place, African countries may be vulnerable to cyber attacks with detrimental impacts on their economies, infrastructure, and overall stability.

On a positive note, the analysis suggests that African youths have the potential to play a critical role in addressing cybersecurity challenges. With 35% of Africa’s population being young, there is a sizable pool of talent that can be trained to become cyber guardians. By providing appropriate education and training, young people can contribute to safeguarding digital spaces in Africa and beyond.

Furthermore, the analysis stresses the importance of Africa not merely being a consumer of cybersecurity products but creating its own ecosystem for cybersecurity. By fostering domestic innovation and collaboration, Africa can establish itself as a hub for cybersecurity solutions, ultimately enhancing its resilience and capabilities in the face of cyber threats.

Moreover, the analysis highlights the insights shared by Moctar Yedali regarding the rapidly changing nature of technology and its implications. He emphasizes the need for continual capacity building to keep pace with technological advancements. Yedali also warns of an impending digital divide in which consumers may have to choose between competing systems or technologies. This could lead to a “cold technical war” among more influential countries, while smaller countries follow without much choice.

In conclusion, the analysis sheds light on the unique cybersecurity challenges faced by Africa and highlights the need for more attention and inclusive measures to address them. It calls for the inclusion of multi-stakeholders in cybersecurity discussions, the development of efficient cybersecurity strategies, the training of African youths as cyber guardians, and the creation of a robust ecosystem for cybersecurity in Africa. Additionally, it underscores the importance of continual capacity building and technological cooperation to bridge the digital divide and ensure socio-economic progress.

Johan Eckerholt

The COVID-19 pandemic has highlighted our heavy reliance on digital methods of communication and governance, revealing the critical importance of trust and security in these processes. As our everyday lives become increasingly digitalized, it becomes essential to ensure the integrity and safety of our digital systems.

Global cooperation plays a crucial role in achieving sustainable digital transformation. Digital issues transcend national borders, making collaborative efforts necessary to address them effectively. Partnerships between governments, the industry, international organizations, and civil society are key to tackling digital challenges.

The growth and development of small and medium enterprises (SMEs) rely on a broad and secure digital system. Secure digital processes that enable cross-border transactions are crucial for the success of industries. Ensuring the safety of digital transactions fosters the growth and expansion of SMEs.

Finding the right balance between regulation and governance is critical for the growth of the digital economy. The involvement of organizations like the International Telecommunication Union (ITU) and industry leaders is vital. The ITU monitors digital activities, while the industry provides the necessary technological foundations. A collaborative approach can facilitate digital progress and innovation.

To build trust in digitalization, common rules, effective implementation tools, robust monitoring mechanisms, and resources for remediation are essential. Clearly defined and universally agreed-upon rules, comprehensive implementation tools, rigorous monitoring processes, and adequate resources can instill confidence in digital systems.

Cybersecurity is an integral part of our digital lives. It is crucial to integrate cybersecurity measures into digital systems to ensure a safe and secure online environment. Protecting personal data, financial transactions, and sensitive information is of utmost importance.

Improving the link between the defense, economic, and development communities is a challenge that needs to be addressed. Strengthening connections and fostering collaborative efforts between these communities is essential to tackle global issues and achieve sustainable economic growth while reducing inequalities.

A consortium project is currently underway that aims to develop guidance through consultation, including a session in Singapore, with the resulting guidance expected by December. The consortium brings together expertise and perspectives to address key digital challenges.

Johan Eckerholt, a participant in the project, acknowledges the value of prior discussions and plans to incorporate the points raised into future project proceedings. This demonstrates his openness to feedback and commitment to improving the project based on valuable insights.

In conclusion, the COVID-19 pandemic has underscored the significance of trust and security in digital communication and governance. Sustainable digital transformation requires global cooperation, a secure digital ecosystem for SMEs, a balanced approach to regulation and governance, common rules and tools, integrated cybersecurity measures, improved collaboration between different communities, and consortium-led guidance initiatives. Through collaborative efforts, we can build a safe, secure, and prosperous digital future.

Patryk Pawlak

There is a clear confusion on the ground regarding the differences and intersections between the terms ‘digital’ and ‘cyber’. Patryk Pawlak’s experience in European Union (EU) projects revealed this confusion, highlighting the need for clarification on these terms and how they integrate.

Furthermore, Pawlak emphasized the importance of mainstreaming in the context of understanding ‘digital’ and ‘cyber’. Mainstreaming refers to incorporating these concepts into various aspects of project implementation. EU engagements have demonstrated that mainstreaming can be a solution to the challenges faced on the ground in relation to digital and cyber projects.

The enabling environment is often overlooked in cyber capacity building, as stated by Pawlak. In his work on operational guidance on cyber capacity building for the European Commission, he identified the enabling environment as a key issue. This highlights the need to consider the broader context within which capacity building initiatives take place.

Pawlak’s involvement in the generation of operational guidance and strategic directions for cyber capacity building for the European Commission reflects the importance placed on considering different aspects of cybersecurity in the development of projects. This highlights the need for comprehensive and strategic approaches to cybersecurity development.

Delegates are faced with a dilemma when it comes to dealing with blockchain and cybersecurity. A colleague in the delegation was tasked with implementing a project on blockchain in the justice system, but also needed to incorporate cybersecurity measures. This highlights the challenges that arise when these two complex and distinct areas intersect.

It is evident that expertise in both blockchain and cybersecurity is needed to aid delegates in addressing these challenges. The colleagues in the delegation mentioned by Pawlak were not experts in either of these fields. Therefore, the involvement of experts becomes crucial in order to navigate the complexities and ensure the effective implementation of projects.

In conclusion, the analysis highlights the confusion surrounding the terms ‘digital’ and ‘cyber’, the importance of mainstreaming in project implementation, the often overlooked enabling environment in cyber capacity building, and the need for expertise to address the challenges posed by the intersection of blockchain and cybersecurity. These insights emphasize the need for clear definitions, comprehensive approaches, and the involvement of knowledgeable experts in the field.

Christopher Painter

There is a significant divide between the development and cybersecurity communities, as the development community tends to perceive cybersecurity as too technical and defensive. However, it is argued that cybersecurity is actually a foundational element of development, with almost every development project having a cybersecurity aspect.

One of the main challenges is the fear of crossing committees in the UN negotiation process. Countries view cybersecurity capacity building as a military thing, rather than as an area suitable for official development assistance. This perception contributes to the segregation between different communities in development and cybersecurity.

To address this divide, a conference is being held in Ghana. This conference aims to bring together the development community and the cybersecurity community, and is co-organized by global organizations, including the World Bank, World Economic Forum, and the Cyber Peace Institute. The conference’s objective is to build understanding and champion the integration of cybersecurity in development.

It is argued that there is a need for interaction and communication between the development and cybersecurity communities. The cybersecurity community also needs to improve its communication with the development community. The division between the two communities is seen as a barrier that hinders effective collaboration and response to cybersecurity threats.

Furthermore, it is highlighted that combining diverse sectors and breaking down barriers is essential to understanding and effectively responding to cybersecurity threats. Issues in cyberspace require the contribution of different sectors, including security, human rights, and economics, in order to handle them effectively. This approach emphasises the importance of collaboration and integration across various fields.

Notably, there are also instances where organisations and regions misunderstand their roles and responsibilities regarding cybersecurity and digital matters. For example, an unnamed country did not attend International Telecommunication Union (ITU) meetings because they viewed it as solely related to telecommunications, despite it covering broader areas such as cybersecurity. This misunderstanding underscores the need for clarity and coordination in understanding the scope and responsibilities of different entities in addressing cybersecurity challenges.

In a positive development, some countries have institutionalised the merger of digital and cybersecurity roles. This practice involves integrating various aspects of the digital realm, with the role of the cyber ambassador aligned with that of the digital ambassador. This integration aims to create a more comprehensive and coordinated approach to dealing with digital and cybersecurity matters.

In summary, there is a clear divide between the development and cybersecurity communities, with the development community perceiving cybersecurity as too technical and defensive. However, it is argued that cybersecurity is a foundational element of development, and projects in the development field often include a cybersecurity aspect. The conference in Ghana is a significant effort to bring the two communities together and improve understanding and collaboration. It is crucial for both communities to interact, communicate effectively, and integrate diverse sectors to effectively respond to cybersecurity threats.

Michael Karimian

In order to achieve the Sustainable Development Goals (SDGs), it is necessary to focus on two key areas: secure digital transformation and collaboration among various stakeholders. Secure, trusted, and inclusive digital infrastructure is fundamental for economic and social development. This requires integrating cybersecurity principles into the digital development agenda. By doing so, societies can be safeguarded and potential risks can be mitigated.

Collaboration among different stakeholders is also important. Active participation from governments, international organisations, industry players, and civil society is crucial for a successful multi-stakeholder approach. In order to address the complex challenges associated with digital transformation, it is necessary to bring together the expertise and resources of different actors. By working together, synergies can be created and comprehensive solutions can be developed to tackle cybersecurity issues effectively.

Furthermore, there is a need to mainstream cybersecurity into digital development programs and broaden the funding sources for cybersecurity capacity building. It is imperative to seamlessly integrate cybersecurity considerations into the design and implementation of digital and development initiatives. By prioritising cybersecurity from the outset, potential vulnerabilities can be identified and addressed proactively. Additionally, expanding funding sources for cybersecurity capacity building can ensure that the necessary resources are available to build robust and resilient digital systems.

Another important aspect highlighted is the importance of conducting real assessments of cyber needs, feasibility, and impacts in development projects. This involves evaluating the cybersecurity requirements and implications of digital initiatives. By conducting thorough assessments, potential risks can be identified, and appropriate measures can be taken to enhance security and mitigate threats. For instance, in the digitisation of court systems, assessments can help identify the cybersecurity measures needed to protect sensitive data and ensure the integrity of the judicial process.

Additionally, it is crucial to view cybersecurity as an investment rather than simply a cost. Cybersecurity should not be seen as an expense but as a strategic investment that can yield long-term benefits. By investing in robust cybersecurity measures, organisations can protect their data, systems, and users from cyber threats. This investment can lead to increased trust, business resilience, and economic growth in the digital era.

In conclusion, achieving the SDGs requires a focus on both secure digital transformation and collaboration among various stakeholders. By integrating cybersecurity principles, adopting a multi-stakeholder approach, mainstreaming cybersecurity in development programmes, conducting thorough assessments, and viewing cybersecurity as an investment, societies can build secure, resilient, and inclusive digital ecosystems that foster sustainable development.

Tereza Horejsova

The Global Forum on Cyber Expertise (GFCE) has emphasized the importance of incorporating cybersecurity into development initiatives. It has been observed that cybersecurity is often disregarded due to a lack of understanding on how to integrate it into other development interventions. Recognizing this disconnect, the GFCE aims to initiate discussions to mainstream cybersecurity in the development agenda. All partners involved in the forum understand the connection between sustainable digital transformation and cybersecurity, highlighting the need for a comprehensive approach.

To address this, a multi-stakeholder approach is deemed essential in formulating a comprehensive cybersecurity plan. The Government of Sweden (through its Ministry of Foreign Affairs), the Global Forum on Cyber Expertise (GFCE), the International Telecommunication Union (ITU), and Microsoft are partnering in this initiative. They plan to bring in various stakeholders to contribute to the discussions. By involving a diverse range of perspectives, expertise, and resources, a more holistic cybersecurity strategy can be developed.

A specific plan has been laid out for a series of workshops that will focus on discussing various aspects of cybersecurity and its role in digital transformation. These discussions aim to explore the importance of digital development for achieving the Sustainable Development Goals (SDGs), learn from past and ongoing cyber capacity projects, implement UN cyber norms, and consider the role of diplomacy. The intention behind these workshops is to gather insights and formulate suitable cybersecurity strategies that align with the broader development agenda.

The Global Forum on Cyber Expertise operates under the Swedish Government’s initiative and seeks to incorporate the feedback received during these discussions in a multi-stakeholder compendium. By engaging stakeholders from different sectors and countries, it aims to foster collaboration and ensure that cybersecurity remains an integral part of development efforts.

Moreover, the division and misunderstanding between the development and cybersecurity communities are acknowledged and seen as a challenge. To address this, the Forum encourages communication and interaction between these two communities. By bringing them together and facilitating a shared understanding, it aims to bridge the gap and move towards common goals. This alignment is considered essential, as both communities have a role to play in achieving sustainable development.

In addition to engaging stakeholders, the Global Forum on Cyber Expertise also emphasizes the need for audience participation and involvement. It appeals to the audience to share their experiences, concerns, and challenges regarding cybersecurity and development. This approach seeks to collect a wide range of perspectives and ensure that the discussions take into account the diverse needs and experiences of different stakeholders. The Global Conference on Cyber Capacity Building (GC3B) is highlighted as an opportunity to further enrich these conversations, and expectations are set for its outcome to contribute to the overall understanding and progress in bridging the gap between cybersecurity and development.

In conclusion, the Global Forum on Cyber Expertise recognizes the importance of incorporating cybersecurity into development initiatives. It advocates for a multi-stakeholder approach to formulate a comprehensive cybersecurity plan and has outlined a series of workshops to discuss various aspects of cybersecurity in relation to digital transformation and the SDGs. By improving communication and engaging with stakeholders and the audience, the Forum aims to bridge the divide between the development and cybersecurity communities, fostering collaboration, and achieving better outcomes in sustainable development efforts.

Session transcript

Tereza Horejsova:
to the issue of achieving the sustainable development goals through secure digital transformation. This open forum is organized by the government of Sweden. So thank you for joining us here and thank you to those of you joining us online. What we will do today is to kind of discuss how the issues of cybersecurity can be mainstreamed in the development agenda. It’s a topic that is very close to many of the partners of this project that we would be introducing here today and that we hope to get inputs from you on. So just to recap, this project on mainstreaming cybersecurity in development has the following partners. It’s the government of Sweden, as I have mentioned, it’s Ministry of Foreign Affairs, it’s the Global Forum on Cyber Expertise, the GFCE, it’s the International Telecommunication Union, the ITU, and last but not least, Microsoft. So welcome on behalf of all of the partners in this consortium. To set us off maybe just a few general remarks. So cybersecurity is often decoupled from other development interventions due to lack of awareness, understanding of how to integrate it, or dual use technology concerns. However, what all of the partners and as we have understood also many other around these issues have understood that sustainable digital transformation and cybersecurity, there are some vital cross-cutting needs there. And for this reason, today, we are formally launching this work stream that we have been working on together to facilitate a frank and inclusive discussion among stakeholders and distill their recommendations into a multi-stakeholder compendium that we are planning to launch later this year. And we will share more details on later. So to this end, we really plan to bring various stakeholders to a series of workshops. One of them is considered as this one happening at the IGF. And just to give you a little teaser of the issues that we are planning to discuss.
It’s issues such as the role of cybersecurity in supporting safe and secure digital transformation, the importance of digital development as an enabling function to achieving the SDGs. What are some of the lessons learned from past and ongoing cyber capacity building projects? How can we use some concrete goals or checklists and indicators for the implementation of UN cyber norms, mainstreaming cyber capacity building with various development programs and funds, and also the role of diplomacy in creating institutions and mandates to support cyber mainstreaming? So these are some of the issues that we will be discussing today. We do hope that we will have a lively discussion that will then be reflected in the compendium in the making. So that was just a very brief introduction of the topic. And now let me introduce also the discussions that we will have to take us through that. So I will start on my left. At the end of the table, we have Christopher Painter, who is the president of the GFCE Foundation Board. We continue with Yasmin Idrissi from the International Telecommunications Union. We also have Johan Eckerholt from the Swedish Ministry of Foreign Affairs and Michael Karimian from Microsoft. My name is Tereza Horejsova, also with the Global Forum on Cyber Expertise, and it will be my pleasure to be your moderator alongside my colleague Alan Sabanlong, who is joining us from the Philippines, who will be serving as a bridge with the online audience. And I do hope also His Excellency Moctar Yedali, ex-minister from Mauritania, joining us from there at hopefully 4 a.m. in the morning. I don’t know if we have a confirmation if Mr. Moctar is with us. He is. So very good. Good morning, Moctar. I really appreciate you being here. So let’s get started. Each of our panelists will give some brief reflections, and then we will go into the discussion. And if you allow me, I would like to start with you, Johan. 
There is a reason why the government of Sweden has considered this issue of importance. So could you maybe kick us off on how you see the importance of digital development as an enabling function for the SDGs? Thank you.

Johan Eckerholt:
Thank you, Tereza, and also thanks to all the partners from the GFCE, the ITU, and Microsoft here today. I think from the Swedish government perspective, we have had a long development journey from a poor agro-country to a relatively industrialized and digitalized country. And I think that journey has a lot to do with trust. I think that is where we’re coming from. And what we have felt and seen, and I think we’ve all seen it even more so with the COVID and the pandemic, that core parts of our everyday life and our governance functions are digitalized. So creating trust and security in how we share information, how we communicate, how we deal with this across borders have become an essential part on a functioning society, on a functioning economy, on a functioning development. So from our point of view, that is why we see a very strong link with the social SDGs and building cybersecurity. Because if you take an issue for like information or disinformation, for example, it’s about understanding, believing, trusting in the sources that you have. Those things are key when we look at it. And I think also for an industry to grow, we today buy and sell things across borders. If I want to sell something in another country, I would like to be sure that the thing I’m buying is what I’m ordering. I’d also want to make sure that my credit card or whatever I used to pay is not skimmed along the way. So in that, I think we have a very key issue. And I think especially for small and medium sized enterprises in developing countries, being able to have that security and that broadness around the system is key for having the opportunity to grow and to develop. And I think that for us are very, very key things. And I think when we are looking at it, I think one needs to raise the awareness of cybersecurity because it might sound technical, but it is essential to allowing the other aspects of this trust building to get to the SDGs. 
And I think I’m very extra happy that we are all of us here together because I think in order to achieve that trust, we need to work together. I mean, we as governments need the help of the industry with Microsoft. We need to cooperate together in international organizations and we need civil society and the experts like you, Tereza, to work on taking this forward. And we cannot do it alone. And I think digital is one of those things, we all know it. It doesn’t stop at borders. So we need to do this together. I think I’ll stop here a little bit. Maybe I can touch a little bit on how we see it. I think, what does that mean? I think, well, it means that we need to have common rules. We need to have the tool for implementation. We need to have the tool for monitoring. And in the end, we will probably also have the resources to remedy when things go wrong. So I think you need all those aspects. And I think, I mean, ITU is doing an excellent job on the monitoring part. I mean, we all try to help in providing our input from a national perspective so that we can see what is needed. We can do that gap analysis. And I mean, from the GFCE, I mean, you’re producing the knowledge base that we need to get there. And the industry is a key partner because they are actually providing the fundamentals of which we work through. And we as governments try to find the right balance between regulation and governance and getting that right. So I think those are the things that we are working on. And also, to be very frank, also struggling with because I think that the challenge of doing this is of course that it needs that cooperation among us, but also in governments and to get all the different key partners to talk together. So I think those are a little bit how we see the basic points of why this is important and why we need to work together in this setup. So I think what we are looking for is to learn from you and how we can do this better. 
And I’m very, very much interested in hearing your views on this. Thank you.

Tereza Horejsova:
Thank you very much, Johan. Also, for stressing kind of the multi-stakeholder importance of these discussions. There is also no coincidence why this consortium of this project has been set up the way it has been. And you know, later on, if our time allows, having you as a diplomat in Geneva, it would be really useful also to hear some reflections from you on how the development and cyber security worlds actually meet at this center of diplomacy. But let’s leave it for later. As I have mentioned in the opening remarks, we have had two consultations already with various contexts at various venues. The first one was happening in July in New York during the open-ended working group. The second one was held about two weeks ago virtually in cooperation with also the Swedish International Development Agency, SIDA, on really trying to bring more of the development practitioners into this discourse. And we have learned quite a lot. Some, let’s say the concerns or recommendations that we have heard in these consultations were expected. Some were maybe a little bit surprising, yes. So maybe to also set us up for the discussion later, I would like to get into those briefly. And Michael, if I may turn to you, you know, so what has come up as the main barriers in mainstreaming cyber security and development so far?

Michael Karimian:
Sure thing, thank you, Tereza, for being our moderator and of course for your hard work in preparing today’s session. And Yasmin and I will reflect on this together and I think Yasmin and I have very similar takeaways from the consultations, which in a way speaks to a level of unanimity, both from an international organization actor and a private sector actor, the strength of the consultations and the common themes that are coming through time and again. Firstly, and Johan touched upon this, it is abundantly clear that secure digital transformation is absolutely essential in our pursuit of the SDGs. And that’s true in ways which weren’t fully recognized when the SDGs were being drafted and scoped and agreed to well over a decade ago. And now as we’re kind of on the cusp of the post 2030 agenda and not quite knowing what that will look like, that in many ways speaks to the timeliness of this project and helping to lay the groundwork for that. But even before we get to the post 2030 agenda, right now in today’s interconnected world, it’s obvious that secure, trusted and inclusive digital infrastructure is the very foundation of economic and social development everywhere actually, not just low and middle income countries. However, that digital transformation journey also brings with it a huge range of inherent cybersecurity risks, particularly for nations and regions that may currently lack the necessary cyber resilience to counteract the ever evolving cyber threats. So it’s imperative to recognize that in order to empower and safeguard all societies from the mounting cybersecurity risks, we must of course proactively and comprehensively integrate cybersecurity principles into the digital development agenda. 
One recurrent theme which has clearly resonated throughout the discussions is the need for a collaborative and inclusive approach. Exactly as Johan said, achieving the SDGs through secure digital transformation requires the active and engaged participation of various stakeholders, and that includes governments, international organizations, the development community, industry players and civil society. That necessitates the pooling of knowledge, expertise and resources, because of course the magnitude of these challenges is simply too vast to be tackled by one entity in isolation. I know that's a very common theme which we hear throughout the IGF. Furthermore, the consultations have really underscored that capacity building in cybersecurity is not merely a desirable option but an absolute imperative, and in some ways that'll be music to Chris's ears, though it's something he already knows and understands very, very well. It's evident that we need to prioritize efforts in building and enhancing the capacity to manage and mitigate cybersecurity risks, especially in regions, again, where digital transformation is occurring at a really accelerated pace. Strengthening the skills and knowledge required to navigate the intricate web of cybersecurity challenges is fundamental to the achievement of the Sustainable Development Goals. Additionally, the discussions so far have highlighted the importance of mainstreaming cybersecurity into digital development programs and funding mechanisms. To ensure that the SDGs are not only supported but indeed advanced by digital transformation, we must seamlessly integrate cybersecurity considerations into the very fabric of digital development initiatives, both at the initial design level and in the ongoing implementation of these projects. And we've learned that the approach to mainstreaming cybersecurity must be adaptable and context-specific. Of course, there's no one-size-fits-all solution.
So every region and country faces a range of unique challenges and requirements on their digital development journey, and as such, strategies and interventions must be tailored to address those specific contexts effectively. Lastly, but certainly not least, on the issue of funding mechanisms: that's clearly an issue that has come to the forefront. The consultations have illuminated the critical need to broaden the sources of funding for cybersecurity capacity building. It's not sufficient to rely solely on defense budgets, for example, to support these endeavors; it's imperative that development budgets are also mobilized to ensure that digital development projects are fortified with the necessary cybersecurity components. So aligning funding mechanisms with the specific needs and objectives of digital development initiatives is essential too. I think these are just some of the valuable insights from the consultations so far. Yasmin, I think you'll have similar perspectives and maybe more to share.

Yasmine Idrissi:
Thank you so much, Michael. And thank you, Teresa. So indeed, these consultations have also lifted the lid on some things that, as cybersecurity practitioners, you don't necessarily consider. There's also a definition issue: as a recommendation, we often try to refer to it as cyber resilience, because security as a word implies certain things, of course, and this is partly caused by the fact that cyber capacity building, as mentioned by Michael, is often funded from defense budgets rather than development budgets. So one recommendation that has come up since the two workshops we've had is to demystify the field, because actors, be it development or policy, often misunderstand it. They see it only as a technical issue, rather than a consumer issue or a policy issue. So there's a bit of discomfort among development professionals over its perceived technical nature. One other important thing that we often overlook is that it's a very English-focused field. There needs to be a little more inclusion in terms of languages, and both national languages and local dialects need to be reflected. Another important recommendation is to consider going beyond our usual communities, both at the national and international level. The development community and the cyber capacity building community often do not talk. And at the national level, there is often interagency friction between mandates, and that's the case in numerous countries. So oftentimes we end up in an echo chamber, speaking to cyber diplomats who obviously know the importance of this, but there needs to be, of course, inclusion also of what we would consider non-traditional actors, like political parties or civil society, and people who are also active in shaping this policy landscape. Yeah, so including cybersecurity and cyber resilience in digital development projects is, of course, key.
So I can say that even in the ITU, the approach is still very siloed. Oftentimes we have digital development projects as one thing and cybersecurity projects as another, even within the same organization, so I can only imagine how it is elsewhere. So maybe a recommendation that has come out, and that I very much agree with, is that cybersecurity can sometimes be added as a criterion in audits for digital development projects, and maybe donors can have a role here, where they build cybersecurity requirements into their projects. So it's a bit of an all-hands-on-deck type of effort. But I think what's really key is to continue to have these conversations and maybe turn to different actors, maybe some that we haven't thought about, to understand what good examples we can showcase through this work stream. Thank you.

Tereza Horejsova:
Thank you very much, Yasmin. Thank you, of course, also Michael. I will be curious later to hear whether any of this was surprising to you as well, and whether you have other reflections. But before we do: Michael, you have mentioned, for instance, the unique challenges that various regions face, and Yasmin has brought up the issue of languages. I would like to turn to you, Mokhtar. You, of course, knowing Africa so well, can you share with us a little bit of your perspective on the intersection of digital resilience and achieving the Sustainable Development Goals, how it has unfolded in Africa, and what the situation there is regarding the multi-stakeholder participation that we've heard about as quite an important need? So it's 4 a.m. for you, good morning, but we hope you're fresh and ready to share something with us.

Moctar Yedali:
Indeed. Can you hear me well now? Yes, we can. Do you see me? So far, you are small, but I believe you will be made big. Yes, it's perfect. Please go ahead. Thank you very much, and thank you for the opportunity, and good day to you, because it's morning, it's 4 a.m. here where I am in Mauritania. And thank you for having me, and I congratulate the previous speakers on what they have contributed, specifically with regard to the connection between the SDGs and cybersecurity and ICTs in general. As has been mentioned previously, today our lives cannot go on without using digital technologies for our development, and this affects a lot of our goals with regard to sustainable development. As previously mentioned, the issue of safe transactions and the safe use of technology for development is extremely important. We have seen in the world the rise of cyber attacks and cyber threats here and there. And as mentioned also, it is extremely important that cybersecurity is addressed not only from a security or defense point of view, but from a safety and development point of view, specifically sustainable development. Most African countries now have their digital transformation strategies, but very few of them connect digital transformation and cybersecurity within those approaches. Second, most African countries, or actually the departments dealing with digital transformation, are most of the time addressing the issue of digital transformation in silos. Even though there is a multi-stakeholder approach everywhere, the one point where the collaboration is not yet there is national governance with regard to cybersecurity. Most African countries are lacking that collaboration among different stakeholders.
The issue of cybersecurity is addressed by, probably, the ministers in charge of digital transformation and the ICTs, and the ministers in charge of security slash defense, but others, civil society, academia, and other stakeholders, are very seldom associated with or involved in this endeavor related to cybersecurity. So there is a lack of national governance with regard to cybersecurity, connected with the SDGs and connected with digital transformation, for, how can I say, a global approach where everybody works together and makes sure that, as the representative of Sweden has said, we are all moving toward safety, not only within our national borders but also outside of our borders. So the point I wanted to highlight here is the fact that the multi-stakeholder principle doesn't apply most of the time in the area of cybersecurity. That is number one. Number two, there is a lack of cyber strategy at the national level. And you will see that most African countries do not really have a very, how can I say, efficient cybersecurity strategy associated with the development of digital transformation, bringing the two together. There is an illusion of safety with regard to buying firewalls or antiviruses or whatever, and we think that is cyber safety, but in fact it's not. It's just what I generally call the illusion of safety. And the third point I wanted to highlight is that Africa has the unique specificity of having a lot of young people; most of our population is actually very young. And these people, if they are well employed and trained, can be the cyber guardians, not only for Africa itself but for all of us. Because, as I said, the issue of cybersecurity is not contained within national borders; it concerns everybody, and our performance within that space, cyberspace, is determined by the weakest link. And Africa should not be the weakest link in all that.
So I will stop here; this is just an introduction. But the bottom line is that it is extremely important that the multi-stakeholder principles be applicable also within the framework of cybersecurity. That is the main thing. And Africa should not be a net consumer of products that are manufactured here and there; it should also create its own ecosystem in terms of human capacity, and in terms of a cyber industry, a cybersecurity industry as well. I stop here and I'll be glad to answer specific questions. Over to you, Tereza.

Tereza Horejsova:
Mokhtar, thank you very much. You've been quite critical about the situation in Africa with regard to this topic, but rest assured that we have experienced these challenges, be it the siloed approach or not-so-efficient multi-stakeholder participation, in other parts of the world as well. And that's also why we want to discuss it here today. So thank you. Turning now to you, Chris: the Global Forum on Cyber Expertise has, let's say, evaluated this topic of the cyber and development intersection as quite vital, or challenging, to the extent that it has decided, alongside its partners, to organize a major conference on global cyber capacity building that will be happening at the end of November in Accra, Ghana. So first of all, why this angle for the GFCE, and how do you expect these issues to be tackled in the discussions there?

Christopher Painter:
Thank you, Teresa. And thanks to every other speaker for their comments. I agree with what was said before, certainly. And I think it stems from what we've heard from the other panelists: there is this divide between the traditional development community and the cybersecurity capacity building community, much like there's a divide between the traditional economic and innovation community and the security community on a larger scale. And I think it's partly misperception, that they think of cybersecurity as too technical, but a lot of development projects are technical. On the other hand, I think they think it's a defense thing, a security thing. That's why, you know, the conference we're having in Ghana is about bolstering cyber resilience, and the term was chosen specifically to address both of those communities. It's something that resonates with both, rather than using the term cybersecurity, which has resonance within this community but maybe not that much resonance within the development community. And we've seen this play out in a lot of different venues. You know, a lot of countries think that cybersecurity capacity building is not, as they say, ODA-able, because they think it's a military thing. It's not, but that's the perception sometimes. We see it even in the negotiations at the UN. I remember in the first OEWG, we wanted to get some language in saying that cybersecurity undergirds the UN development goals, which indeed it does. They may not mention cybersecurity as a separate goal, or even digitization as a separate goal, but it undergirds many of those goals and is foundational to them. But there was a little bit of fear that, oh, the development goals are the province not of the First Committee but of the Second Committee. And that's, you know, kind of crazy when you think about it, because we're all in the same world and we all have to deal with these same issues.
So, the conference in Ghana, and I should emphasize it's not just a conference for Africa. It is being held in Africa, and there will be significant African presence and participation, but it is a global conference; it's really for all over the world. And one of its chief goals is to bring together the traditional development community and the cyber community to, as others have said on this panel, mainstream cyber capacity building as a foundational element of development. Now, one of the co-organizers of this conference is the World Bank. We have the World Economic Forum, the World Bank, the CyberPeace Institute and us, the GFCE. We also have a steering committee of a number of countries and organizations, including Microsoft. So there's this understanding that we need to mainstream this and bring these communities together. We've seen the World Bank, USAID, and the British development agency, I think, on the front foot in the last few years in trying to do this kind of integration, but it's still rare. And that has implications. You were saying, for instance, that digitization development projects obviously have a cybersecurity angle, but really almost every development project does: whether it's water, power, financial systems, almost any foundational thing you can think of, cybersecurity is important. So we want to bring these together. We want to build more understanding. We want to have an outcome coming out of this conference that is action-oriented, that really champions this integration and brings it forward. It's a conference, it's an important marker, but it's really a process going on after that, to continue to make sure that we bring these communities together, because it will make us both stronger.
It will help the development community, because if development projects go wrong because they don't have good cybersecurity, that hurts everyone. And it will help the cybersecurity community because it opens up, as others have said, more resources, more access, and more mainstreaming. So we're really looking forward to the conference. It's a big undertaking, but I think it'll be well worth it. And as I said, it's really the beginning, or maybe the midpoint, of a process rather than the end of one. It's something at which I think we're all going to have to persevere and continue to work.

Tereza Horejsova:
No, thank you very much, Chris. And yes, I like that you presented it as ultimately win-win if these two worlds interact a little bit more: a win for both the development community and the cybersecurity community.

Christopher Painter:
I don't want to be perceived as saying, hey, those development guys don't know what they're doing and they need to embrace us. We're not good at talking to the development community either; the division lies on both sides. And so the idea of bringing us together is for both of us to move toward each other. And I think that's important.

Tereza Horejsova:
Thank you, Chris. No, totally. How can we use the development language more? How can the development community use the cybersecurity language more, so that we understand each other better? At this point, I would actually like to turn to you. I can recognize some faces in the room. I know many of you have been involved in various development projects and maybe have come across challenges when it comes to cybersecurity, or vice versa: you have been in cyber projects that maybe struggled with linking to some of the bigger issues of international development. We really would like to hear from you. And please don't let us down, because this is important. And I hope it's in everybody's interest that the compendium we will publish, and any outcomes that come from the GC3B, the Global Conference on Cyber Capacity Building, really make a difference, yes? So if I may ask you to not be shy and come to one of the Microsoft, not Microsoft, microphones. Sorry. I didn't know you guys bought that. To the microphones around the room, and share with us some concerns; that would be excellent. Because if not, you will just be listening to us. Please, Patryk, if I may ask you. So this is Patryk Pawlak, very closely involved in the planning of the GC3B. So what do you have to tell us? Thank you.

Patryk Pawlak:
Yes, thank you. I'll break the ice and hopefully others will follow. So thank you for your presentations. I have a few comments that I think might be interesting as you move forward with thinking about this project. I've been involved with the European Union in a few projects linked to cyber capacity building, and I think one of them specifically might be relevant: the trainings we've been doing with colleagues for the officials of the EU delegations around the world, in different parts of the world, specifically focused on cyber capacity building. But one of the big challenges that we have seen while doing those trainings is that diplomats are more and more asked to engage not only on cyber projects but also on digital issues. There are a lot of digital development projects, as you have said. And the challenge on the ground is that very often they do not really understand the difference: what is digital, what is cyber, and how do the two come together? So I think as you conceptualize the project, and before you even get to the introductions, it would probably be very useful to explain how the two come together and where the idea of mainstreaming comes in, because that very often poses a challenge. The second thing I would like to share, again from another project I've been involved in, is the operational guidance on cyber capacity building that we've been working on for the European Commission. There we have really gone through the process of thinking through exactly how different aspects of cybersecurity can be reflected and taken on board in actually developing development projects. So here I think the European Commission and International Partnerships is actually one example of a development agency, if you want, that has quite a good understanding of these issues, because that's a process we started in 2017.
We had the second edition of this operational guidance, and we touch on a lot of the points that you have flagged, the importance of context, for instance. One of the key issues that we found is important, but very often neglected in these discussions, is the importance of the enabling environment when we talk about cyber capacity building. And I think that's exactly where mainstreaming and digital projects also come in, right? Because we very often say that they are two sides of the same coin, and I think it will be important to reflect on that. The operational guidance, I think, is going to be published before your report, so it might also be useful; I'm happy to share the work that we have done, because we actually go in a different direction, thinking about how cyber capacity building fits. Mainstreaming is one element; there is also the de-risking approach that we are looking at. So I think it might be interesting to think about how those different elements come together. But yeah, I'll stop here.

Tereza Horejsova:
Thank you. Thank you very much, Patryk, for the excellent inputs, and for breaking the ice as well. I think that Johan is quite well placed to tell us something about the diplomats, you know, and how you tell the difference: where is the line between cyber and development? Maybe, Chris, then to you on the general question of what we mean by mainstreaming cyber and development. And please give me a sign, Michael or Yasmin, if you want to chip in later. Please, Johan. Yes, Patryk? Did I misinterpret you?

Patryk Pawlak:
No, no, you didn't, but maybe I'll give one example that will make it very, very concrete. So for instance, during one of the trainings, we got a very specific question about blockchain. One of the colleagues said, you know, we are asked in the delegation to implement a project on blockchain in the justice system. How does cybersecurity fit in? How do we actually approach those topics? And then they say, you know, we are experts neither on blockchain nor on cybersecurity. So how can we actually manage that on the ground? And I think that's something that would be very interesting to think about as well.

Johan Eckerholt:
Thanks, Patryk, for a very challenging question. I can just say that I started out on the cybersecurity side and spent many years in Brussels dealing with the more hardcore cybersecurity stuff, and now I've been almost five years in Geneva, which is much more the digital side of it. So I can very much agree with what was said. On the funding side, from a Swedish point of view, we are struggling a little bit, because when I came, I asked, why don't we fund this? And the answer I got was, well, these are UN specialized agencies, they are not development agencies. So that's a problem. And what you mentioned about bringing it together, yes, it's a hassle and it's a challenge. What we have done now is set up a national security advisor in Sweden, at the Prime Minister's office, so hopefully that will help us in integrating these things, because it is tricky to get this right. But I think what we can see, at least from a national perspective, and what we feel in Geneva, is that everybody understands that digital is a part of everyday life and it touches all aspects. So cybersecurity, or digital security if you prefer, needs to be integrated. And from a Geneva perspective, yes, you have the hardcore cybersecurity work, in the sense that you have the CyberPeace Institute working on how to help implement cybersecurity, for example for NGOs working in the humanitarian field that have extremely sensitive data; the ICRC suffered a major attack. So there you have the link directly. But of course you also have the issue of digital security, if you would call it that, in how we deal with standards in the ITU and the ISO. And then of course we have the whole human rights angle, which also has security implications, because we talk a lot about bridging the digital divide, and we all need to do that.
But the minute we get there, the first thing I want as a parent is that when my kids go online, it is a safe and secure environment. That we have trust and security, and that basic rights are respected, is for me as an individual very fundamental, and for Sweden it's fundamental; that's why we think these things matter. But getting the link right between the development community and, so to say, perhaps the classical defense community, but I also think the economic community, is something that we are struggling with. We are not there yet, but we are working on it. I think the best thing we can do is to try to build these kinds of networks and try to work together. And I have the privilege of being in Geneva and seeing that, because I also deal with e-commerce negotiations at the WTO. There we try to get those overall regulations that are key for getting trade flows and digital flows going. We have language on cryptography, we have language on cybersecurity in there, and yes, it will not be specific, but it will actually create that link to what is needed on the implementation side. So I see closer and closer integration. And I think what you have been doing, Patryk, is actually excellent, because we need a lot of capacity building, and we all need that capacity building. The importance of what you're doing is that we need holistic capacity building, because we need to be able to explain what the pieces are, but also how the pieces link up. Thank you.

Tereza Horejsova:
Thank you very much for this honest assessment. Chris, if I may turn to you on the intersection, and on what we mean by mainstreaming, and then over to Mokhtar.

Christopher Painter:
So I can give you a couple of stories. I remember when I was at the White House, working at the National Security Council, we were doing the U.S. International Strategy for Cyberspace. It was the first international strategy on this topic released by any country. But the National Economic Council people said, you can't call it that, because that sounds like it's about cybersecurity. And it wasn't only about cybersecurity: it actually had elements of economics, it had human rights. In fact, we got a whole bunch of people from various agencies in the U.S. government together in a room very much like this one. And they didn't speak the same language: the human rights folks used one set of words, the internet policy people talked about the internet, the security people talked about cybersecurity. And just getting those people in a room, meeting and talking together, produced a strategy that really fulfilled that larger goal, with each of their different areas feeding into it. So that was very helpful, just bringing those communities together. I remember going, for instance, to another country and talking about ITU meetings. And they said, oh, we don't go to ITU meetings, that's this other ministry, because that's telecommunications. But as you know, the ITU does much more than that. So it's breaking down barriers within a country, within governments, between governments and the private sector, within the private sector itself, within civil society; I think it's really changing the way we think about this. And, you know, we are seeing good glimmers of that. For instance, when I was at the State Department and started the Cyber Diplomacy Office, we weren't just about security: we had a human rights component, we had an economic component. Now that's been institutionalized; in a number of countries, the cyber ambassador is also the digital ambassador. So that's a good thing, and I think that's one of the ways we practically need to bring this together.
But we have a long way to go to really make that a reality. And it helps each of us see opportunities on the other side that we haven't seen before.

Tereza Horejsova:
Thank you, Chris. First-hand experience from you, and Johan was smiling while you were speaking, because I think he recognized quite a lot that also happens in other governments. Mokhtar, over to you, please.

Moctar Yedali:
Thank you, Teresa. And actually, the question, or the comment, raised by Patryk is extremely important. In this part of the world, Africa specifically, you can find excellent diplomats who know exactly about the geopolitics, what is happening here and there, but who don't know anything about technology. Even if they are very interested in it, the rapid pace at which technology is coming into our lives makes it very hard for a non-technical person to follow what's happening. And second, you can find excellent technical people who know technology very well but have no idea about the geopolitics and what is really happening in the area of diplomacy and so on. So even though they want to be part of these transformations, the geopolitical transformation today makes it very hard. And let me give an example: we have reached this level of development within the digital space thanks to collaboration among all of us, and it is that technical and technological cooperation that made humanity advance. Today we are seeing something different: different technologies coming up here and there, and restrictions here and there. I may even say we are moving more and more toward a new digital divide, in the sense that we have different setups and technologies in every area of the world, which is bringing us into a technical cold war among the biggest countries. Hence, the smaller countries are just here following. And I wouldn't be surprised if, before long, you will have to choose: shall I go with this system or that system, this technology or that technology?
And you find that the consumers, those who don't have a technological ecosystem or the appropriate capacity, end up following, and rather than carrying one or two phones, they will be carrying thousands of them around their belts just in order to be connected here and there. So that technological cooperation, that bridge between technical people and diplomats, and that continuous capacity building, not only continuous but permanent, is something that has to be there. Capacity building is not a one-time, short-period thing; it is something that needs to be addressed permanently, because technology is so fast, so diverse, so integrated, and so machine-oriented and machine-controlled that we need to make sure that we as human beings are there, cooperating technically, and really pushing for technological cooperation for the safety of all. I stop here.

Tereza Horejsova:
Thank you, thank you very much, Mokhtar; I apologize for cutting in. Yes, I know we have a comment online; then I will go to Yasmin, then to you, Michael, okay? So Allan Cabanlong would like to give us some perspectives from the ASEAN region, joining us from the Philippines. Over to you.

Allan S. Cabanlong:
Hello, Tereza, and hello, everyone. Yes, this is a very nice discussion, just in time, because most of the countries here in ASEAN are gearing up for digital development. However, with disruptions, ransomware and everything, these developments are affected. So in an increasingly digitized world, where our lives are intertwined with technology, there's a need for robust cybersecurity governance, if I may say so. The rise of the digital age has ushered in unparalleled convenience, but it has also exposed us to unprecedented risk. So to achieve seamless digital governance and avert potential disruptions, it is imperative that we make cybersecurity governance a top priority. And in ASEAN, based on experience, leaders are not interdisciplinary. As we have said, most of the leaders in government, or in some areas of ASEAN, understand only a single area of expertise: for example, you're the policy guy, but you're not a technical guy. But I believe what we need to achieve digital transformation and a secure digital landscape is interdisciplinary leaders who understand all aspects of cybersecurity as well as the digital aspects of development. So I believe that without proper cybersecurity governance, organizations and governments are akin to sailing without a rudder in treacherous waters, leaving themselves exposed to potentially catastrophic consequences. Thank you, Teresa.

Tereza Horejsova:
Thank you very much, Allan, for that. Yes. And I mean, yes, it comes down to people very often. That’s why we’ve also brought up the issue of cyber capacity building, and capacity building in general, so often in this session. So thank you for reconfirming that. Just to have an idea, how many more comments in the room do we have to plan for? One. Very good. So quick reflections from Yasmine, quick reflections from Michael, and then we go over to you, sir. Yes. If you want to go now. Okay, please go ahead then.

Audience:
I’m from the government of Sri Lanka. I’m a civil servant, and I served in New York as a diplomat at the time the SDGs were formed. I was part of the Open Working Group. I knew that it is challenging to internalize everything under the SDGs, but when it comes to development practice and funding partnerships, I’m not sure whether the development community is looking into this aspect and what portion should be allocated in budgeting and planning for critical information infrastructure. If there is a particular guideline for fund seekers and other partners, or a formula or something that can work around this, I think it would be helpful. In Sri Lanka, we adopted a cybersecurity strategy and developed a cybersecurity policy as well. These things have to go into standard organizational structures as well as periodic development projects. That’s my comment.

Tereza Horejsova:
Excellent question. Thank you very much for that. For everybody’s information, we have about seven minutes left. Yasmine, Michael, then Johan, maybe to share with us what comes next, and then 20 seconds for wrap-up. So let’s reflect shortly. Thank you.

Yasmine Idrissi:
I’ll make sure to be very brief. Thank you for your question. So definitely there needs to be thinking about cybersecurity in development projects, and I’m always adamant about trying to look into other fields for lessons learned. We can look at climate, for example: climate negotiations are also highly technical, requiring specific expertise, and they might be very intimidating for policy people and diplomats. But now there is an understanding that climate is obviously highly interlinked with development as well. So let’s also draw lessons learned from other fields. I mean, holistic views of governance are not something new, and so I think it can be done in cybersecurity as well. Totally agreed. Michael? Sure thing. Thank you.

Michael Karimian:
So just a quick reflection on Patryk’s example and the colleague from Sri Lanka as well. Patryk, your example of the diplomat being asked to work on a project related to blockchain in the judiciary: we’ve seen the digitization of court systems. In some contexts, you will see donor agencies and development practitioners use things like a needs assessment, a feasibility assessment and an impact assessment. Surely, based on these discussions, we should end up at a point where there is a real assessment of cyber feasibility, cyber needs and cyber impacts. Chris mentioned that the institutionalization of human rights took a long time, but we’ve seen it happen through those sorts of processes, or variations of them. Hopefully we can end up at a similar point with cyber too. And to the colleague from Sri Lanka, on the question of budgets: I think it’s important to have a mindset whereby we don’t think of cybersecurity as a cost, but as an investment, one that pays dividends for many years to come.

Tereza Horejsova:
Thank you very much, Michael. I’m afraid the time is really ticking. So, Johan, could you share with us briefly the next steps of the consortium and the project?

Johan Eckerholt:
Yes, thank you very much. I think it was a very, very useful discussion. We take this with us, and we will have another consultation coming up in Singapore. The aim is to produce guidance on this by December, so we hope to be able to consolidate your points in a way that is useful and take this issue forward. Thank you.

Tereza Horejsova:
Thank you very much, Johan. If any of you do happen to be attending the Singapore International Cyber Week next week, please let us know so that we can make sure to invite you to a consultation that will be taking place in a more Southeast Asian context. Please do let us know. Coming up next will also be a session at the GC3B. And as I mentioned at the beginning, we hope to publish the compendium in December. With that, I would like to thank all of you for listening, all of you who have shared your experience with us, Moctar and Allan online for all of your inputs, and here in the room Chris, Yasmine, Michael and Johan. Have a good rest of the IGF and see you around. Thank you. Thank you.

Speaker statistics

Speaker              | Speech speed         | Speech length | Speech time
---------------------|----------------------|---------------|------------
Allan S. Cabanlong   | 121 words per minute | 234 words     | 116 secs
Audience             | 170 words per minute | 162 words     | 57 secs
Christopher Painter  | 211 words per minute | 1304 words    | 370 secs
Johan Eckerholt      | 165 words per minute | 1610 words    | 587 secs
Michael Karimian     | 201 words per minute | 1023 words    | 306 secs
Moctar Yedali        | 143 words per minute | 1338 words    | 562 secs
Patryk Pawlak        | 180 words per minute | 706 words     | 235 secs
Tereza Horejsova     | 166 words per minute | 2276 words    | 825 secs
Yasmine Idrissi      | 185 words per minute | 690 words     | 224 secs