Leave No One Behind: The Importance of Data in Development | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Samuel Nartey George

In a series of discussions, the importance of including all communities, including rural areas, in data governance was emphasized. It was noted that decisions should be based on data from diverse communities, but there is often a discrepancy in the data collected from urban and rural areas due to differences in connectivity and affordability. To address this, it was suggested that data governance should prioritize inclusion to ensure fair decision-making.

Another topic discussed was the need for affordable internet access and devices to promote comprehensive digital footprints. It was highlighted that underprivileged communities face barriers, such as expensive smartphones, which prevent them from fully participating in the digital world. To overcome this, the idea of creating cheaper “generic” technology, similar to generic pharmaceutical drugs, was proposed. This would make internet access and devices more affordable, enabling a more inclusive digital footprint.

The concept of affordable, generic devices was further explored, suggesting manufacturing cheaper devices on the African continent itself. Drawing inspiration from the pharmaceutical industry, the goal is to make technology accessible to all and bridge the digital divide, particularly in underprivileged communities.

Additionally, the potential transformative impact of connecting unconnected communities was discussed. Access to online educational materials was seen as a way to provide young people in these areas with employable skills, benefiting their economic prospects. Internet connectivity was also seen as crucial in establishing local businesses and livelihoods. Therefore, prioritizing internet connectivity in rural areas was deemed essential to unlock economic opportunities and educational advancements.

The significance of education, particularly digital skills, was emphasized. It was recommended to prioritize digital skills development to enable individuals to thrive in the digital era. One suggestion was to allocate a portion of the constituency development fund for acquiring digital skills, ensuring that individuals are equipped for the digital age.

Partnerships with the private sector and civil society were seen as essential in achieving the goals discussed. These partnerships would facilitate the transfer of necessary skill sets and support the implementation of initiatives aimed at promoting inclusion, connectivity, and digital skills development.

During the discussions, it was noted that Africa is being exploited not only for its natural resources but also for its data, largely due to a lack of understanding among leaders about the economics of data. It was emphasized that African countries need to prioritize and regulate their data usage to protect their interests.

Implementation checks of cybersecurity legislation and data protection laws were also highlighted. It was observed that while some countries have these laws, proper enforcement is lacking. It is necessary to have rigorous implementation checks to ensure effective cybersecurity and data protection measures.

Overall, the discussions emphasized the importance of inclusion in data governance, affordable internet access and devices, partnerships, education, and regulation of data usage. Addressing these issues can promote digital inclusion and protect data in Africa, leading to sustainable development and benefiting individuals and society as a whole.

Lee Mcknight

Data rights, privacy, and security are vital components that should be integrated into the governance framework of any community, village, or city. It is essential that citizens’ data rights are determined by the people living in the community, ensuring that their data is not harvested automatically without consent by external entities.

To protect citizens’ data rights, collaboration with the Africa Open Data and Internet Research Foundation has been established. This collaboration aims to bring connectivity to communities, with a primary focus on safeguarding citizens’ data from being harvested without consent. By working together, they are ensuring that individuals have control over their own data and that it is not exploited for external purposes.

In addition, community networks play a significant role in providing connectivity to the unconnected, enabling them to be included and accounted for in data. These networks have been advocated for by the Internet Society and have shown success in various cases. For example, in a previously disconnected community in Chile, the mayor states that thanks to a community network, her community now exists in the data pool. This demonstrates the positive impact of community networks in bridging the digital divide and ensuring that everyone has access to connectivity.

Moreover, advancements in technology have provided new opportunities for community networks. Today, these networks can incorporate energy solutions, such as portable microgrid solar-powered units. This innovation allows for longer connectivity durations without the need for additional infrastructure. A small portable microgrid solar-powered unit developed at Syracuse University has been deployed in over 20 countries, particularly in Ghana and the Democratic Republic of Congo. This infrastructure-less network not only provides connectivity but also addresses the issue of limited access to affordable and clean energy in many communities.

In conclusion, embedding data rights, privacy, and security into the governance framework of communities is crucial. Citizens’ data rights should be determined by the community members themselves, protecting their data from being harvested without consent. Collaboration with organizations like the Africa Open Data and Internet Research Foundation plays a vital role in achieving this goal. Additionally, community networks offer a solution to bridge the digital divide, ensuring that the unconnected are included and accounted for in data. By incorporating energy solutions, community networks can provide longer connectivity durations without the need for extensive infrastructure. These efforts collectively contribute to creating a more inclusive and secure digital environment for all.

Audience

The importance of education and skill acquisition in digital fields for African nations is emphasized in the analysis. It highlights Ghana’s ‘Girls in ICT’ program as an example of efforts to impart digital skills to girls in secondary schools. This program recognizes the significance of providing education and training in digital technology to equip the future workforce.

Furthermore, the analysis suggests that Africa should leverage its data assets and burgeoning internet growth, rather than giving them up indiscriminately for development aid. With the projected boom in internet users in Africa, there is an opportunity for the continent to harness its data resources and drive economic growth. By utilizing data and investing in digital infrastructure, Africa can create economic opportunities and bridge the digital divide.

However, concerns are raised about the excessive collection of data in Africa without appropriate data protection laws. The lack of a human-rights-based approach in data protection laws in most African countries raises potential implications for the future. The analysis points out that accountability for data breaches is often lacking, indicating a need for stronger data protection measures.

Additionally, current data protection laws in Africa often lack necessary elements such as accountability, equality, empowerment, and legality. It is highlighted that some countries enact data protection laws as a formality, rather than out of real necessity. This undermines the effectiveness of these laws and leaves individuals vulnerable to privacy and data breaches.

The issue of sensitive data being stored abroad due to the lack of local storage infrastructure is also raised. For instance, in Togo, electorate biometric data is stored with a private company in Belgium, and the contracts for such data storage are not typically accessible for scrutiny. This lack of local storage infrastructure poses risks in terms of data security, sovereignty, and control.

To address these concerns, the analysis suggests that Africa needs to build the capability to implement effective data protection laws. Despite having data protection laws, some countries, like Togo, lack an agency to effectively implement them. It is highlighted that a regional data registry is being constructed in West Africa with funding from the World Bank. This initiative aims to enhance governance and strengthen the implementation of data protection laws.

In conclusion, the analysis emphasizes the importance of education and skill acquisition in digital fields for African countries. It also highlights the opportunities for Africa to leverage its data assets and burgeoning internet growth for economic development. However, there are concerns regarding excessive data collection without appropriate protection, the lack of accountability in current data protection laws, and the need for local storage infrastructure. The analysis underscores the necessity of building the capability to implement data protection laws and advocates for a cautious approach, highlighting the importance of robust, human-rights-based data protection laws.

Victor Ohuruogu

The UN Foundation’s Global Partnership for Sustainable Development Data is focused on enhancing the availability, accessibility, and utilization of high-quality data for decision-making. Their efforts are geared towards improving the timeliness of data, fostering inclusivity of marginalized groups in the data value chain, and promoting accountable data governance. With over 600 participants from state and non-state actors across 35 countries, this global network is committed to advancing the cause of data-driven policy-making, bolstering SDG 17 – Partnerships for the Goals.

In Africa, there is a pressing need for data literacy and capacity building. The region faces significant challenges in terms of understanding data from both political and technical perspectives. To address this, the Global Partnership conducts programs aimed at enhancing comprehension of various data types and their usage. By empowering individuals with the necessary skills and knowledge, they aim to bridge the capacity gap and facilitate the effective utilization of data in Africa. This aligns with SDG 4 – Quality Education and SDG 17 – Partnerships for the Goals.

Although data holds tremendous potential for informing political decisions, it often lacks prominence in the political space. Many politicians do not fully consider data while making decisions, which can hinder evidence-based policy-making. By elevating the political profile of data, the Global Partnership seeks to strengthen the connection between the private sector and government. This collaboration can contribute to more robust and informed decision-making processes, aligning with SDG 16 – Peace, Justice and Strong Institutions and SDG 17 – Partnerships for the Goals.

With crises like COVID-19 further highlighting the importance of data-driven decision-making, the effective application of data becomes crucial in the humanitarian sector. The Global Partnership recognizes this significance and actively collaborates with humanitarian organizations and Presidential task forces to identify gaps in infrastructure, including computing infrastructure. By strengthening capacity in utilizing both infrastructure and data, policy and decision-making in the humanitarian sector can be considerably enhanced. This effort supports SDG 9 – Industry, Innovation, and Infrastructure and SDG 17 – Partnerships for the Goals.

Moreover, the proper management and implementation of data sovereignty issues are emphasized. Individuals whose data is being collected should have a say in how it is used, while considering the principles of data governance. The development of data governance skills within public sector institutions is crucial for ensuring that data sovereignty is respected and protected. These initiatives align with SDG 16 – Peace, Justice and Strong Institutions.

In conclusion, the UN Foundation’s Global Partnership for Sustainable Development Data is actively working to improve the availability, accessibility, and use of quality data for decision-making. Their efforts include initiatives such as enhancing data literacy, advocating for the political prominence of data, and strengthening data utilization in the humanitarian sector. By addressing capacity gaps, promoting accountable data governance, and engaging both the public and private sectors, the Global Partnership contributes to achieving the Sustainable Development Goals.

Kwaku Antwi

The speakers emphasized the significant impact of data as a crucial driver of economies, often referred to as the “new oil”. They highlighted how data has become the focus of global conversations and has the potential to revolutionize industries and drive innovation. Open data was also discussed, emphasizing the importance of making information easily accessible on various platforms. This allows for the sharing of valuable information across sectors and encourages collaboration and innovation. However, it was acknowledged that the digital divide poses a challenge to accessing data due to limited internet connectivity in some communities. Bridging this divide was emphasized to ensure equal opportunities for all. The speakers also stressed the importance of empowering communities with skills to effectively utilize data and set up networks. Open data and internet connectivity were seen as transformative forces in education, healthcare, agriculture, and other sectors. The conclusion highlighted the need to recognize and enhance Africa’s capacities in internet connectivity to drive transformation through the exchange of open data. Overall, the discussions underscored the crucial role of data and the potential of open data and internet connectivity to contribute to Africa’s inclusive growth.

Dr. Smith

In Africa, the implementation of data initiatives plays a significant role in accelerating progress towards achieving the sustainable development goals on the continent. These initiatives have the potential to address key challenges and support sustainable development in Africa, which faces a unique set of challenges and opportunities. By leveraging technologies and data, Africa can address issues such as poverty, inequality, and environmental sustainability.

One of the main arguments is the importance of implementing data initiatives in Africa. These initiatives can help African countries overcome various obstacles, including limited access to resources and infrastructure. By harnessing the power of data, governments and organizations can make informed decisions and develop evidence-based actions to address pressing issues. This can lead to improved service delivery, better governance, and enhanced economic growth.

It is crucial to address challenges such as data privacy, cybersecurity, and infrastructure development to ensure that these technologies benefit all segments of society, including the most vulnerable. Data privacy and cybersecurity are essential to protect sensitive information and maintain trust in digital systems. Additionally, investing in infrastructure development is necessary to ensure reliable connectivity and access to digital technologies across the continent.

The collaborative efforts between government, private and public sectors, and civil society organizations are vital for the successful implementation of data initiatives in Africa. Governments, along with the private and public sectors, must work together to create supportive systems and policies that enable the effective use of data technologies. Civil society organizations also play a crucial role in advocating for transparency, accountability, and inclusive decision-making processes.

By effectively using technologies, African governments can lessen existing challenges and continue to create more sustainable, inclusive, just, and prosperous futures for their citizens. Embracing innovative technologies can help bridge the digital divide, promote inclusivity, and empower marginalized communities. This, in turn, can lead to reduced inequalities, increased access to quality education, and stronger institutions.

The idea of Pan-Africanism, which recognizes our shared humanity and the importance of unity among African countries, is another noteworthy argument. Furthermore, the idea of a United States of Africa, which has been discussed since the Organisation of African Unity (OAU) days, is not as futuristic as it may seem. Both concepts highlight the importance of regional integration, cooperation, and solidarity among African nations.

However, achieving these goals requires grassroots mobilization and the active involvement of citizens. Leveraging technologies can help move this social movement forward by facilitating communication, organizing campaigns, and raising awareness. The united efforts of individuals, communities, and organizations are crucial in realizing the vision of a global Africa or a United States of Africa.

In conclusion, the implementation of data initiatives in Africa is essential for achieving sustainable development goals. It is vital to address challenges such as data privacy, cybersecurity, and infrastructure development to ensure that these technologies benefit everyone. Collaborative efforts between government, private and public sectors, and civil society organizations are crucial for creating supportive systems. By effectively using technologies, African countries can create sustainable, inclusive, just, and prosperous futures. The concepts of Pan-Africanism and a United States of Africa are not far-fetched, and grassroots mobilization is needed to achieve these goals.

Usman Alam

The Science for Africa Foundation, a pan-African organization that funds research and innovation across the continent, emphasized the crucial role of locally generated, governed, and diverse data for driving impact in Africa. They highlighted the need for diversity, equity, and inclusion in data, especially in the African context and with regard to women. The Foundation also highlighted the challenge of limited access to data, even at high governance levels, due to data being housed in specific ICT ministries. This indicates a need for greater collaboration and coordination in data governance.

Advocacy for equitable partnerships and the prevention of governance in silos was another key point raised. Usman Alam, in his advocacy work, underlined the importance of fostering partnerships that are fair and inclusive. He emphasized the significance of locally generated data that reflects the diverse facets of the demographic, as this ensures a comprehensive representation of the population. Alam cautioned against the risk of governing in silos, as it can hinder access to data, even at high government levels. This highlights the importance of breaking down silos and establishing collaborative frameworks for data governance.

Connectivity was also discussed as a transformative factor in driving research and innovation within the African context. The availability of connectivity can change how research and innovation are conducted and has the potential to unleash the full potential of individuals and communities. The concept of a community of practice was suggested as a means to foster new funding and implementation approaches, facilitating greater connectivity and collaboration in research and innovation endeavors.

Promoting equity through the hub and spoke model of funding was presented as a promising strategy. This model is based on partnering with other stakeholders to provide equal opportunities for all. It offers the potential to empower women’s leadership and strengthen the connection between government, researchers, and data. By fostering collaboration and sharing resources, the hub and spoke model can contribute to reducing inequalities and promoting equitable development.

Trust issues relating to the handling and sharing of personal data were recognized as a concern, particularly within the academic and expert community. This indicates the need for robust data governance frameworks and mechanisms to address these trust issues. Building trust is crucial for ensuring the effective and responsible use of personal data, thereby strengthening institutions and promoting peace and justice.

Lastly, the importance of harnessing endogenous knowledge for sustainability was highlighted. The successful response to the Ebola outbreak in Sierra Leone, Liberia, and Guinea underscored the value of utilizing local knowledge and expertise. Leveraging endogenous knowledge in the continent’s healthcare management can lead to more effective and culturally appropriate solutions. This highlights the significance of recognizing and leveraging local expertise and knowledge for sustainable development.

In conclusion, this analysis emphasizes the critical importance of locally generated, governed, and diverse data in Africa. It highlights the need for diversity, equity, and inclusion in data, the challenges of limited access to data, the value of equitable partnerships and the prevention of governance in silos, the transformative potential of connectivity, the role of the hub and spoke model in promoting equity, the trust issues surrounding personal data, and the value of harnessing endogenous knowledge for sustainability. By addressing these challenges and leveraging these opportunities, Africa can harness data and knowledge to drive positive impact and sustainable development.

Moderator – Yusuf Abdul-Qadir

The discussion highlighted several key points regarding the use of data and technology to enhance connectivity and drive development. Moderator Yusuf Abdul-Qadir emphasized splitting the conversation into two key components. The first component involves addressing gaps in data use and strengthening data ecosystems. This entails identifying and bridging any existing gaps in data usage, encouraging the effective use of data, and enhancing the overall data ecosystem. The second component focuses on leveraging technology and community networks to ensure universal connectivity. This involves leveraging technological advancements and community networks to provide connectivity to even the most remote and disconnected areas.

Inclusivity in accessing and leveraging data was also underscored as a crucial aspect. Ensuring that everyone is included and that no one is left behind in discussions on data access and usage is of utmost importance. However, specific strategies or approaches for achieving this inclusivity were not provided.

Community networks were praised for their ability to bring connectivity to previously disconnected areas. These networks are created by people to cater to the specific connectivity needs of their local communities. The Internet Society has been a strong advocate for community networks. An example of their effectiveness was highlighted by a formerly disconnected community in Chile that established a community network during the pandemic.

Furthermore, the integration of connectivity solutions with sustainable energy sources was deemed effective in enhancing the impact and efficiency of community networks. Syracuse University, in collaboration with the Worldwide Innovation Technology Entrepreneurship Club, has developed connectivity solutions that are packaged with portable, microgrid solar power sources. These solutions have been successfully deployed in over 20 countries and are currently being used in Ghana to connect school children in libraries.

The discussion also recognized that access to the internet and data has the potential to unlock people’s fullest potentials and affirm their existence. Data and internet access play a crucial role in acknowledging the interconnected nature of communities and fulfilling mutual obligations. This perspective aligns with the concept of Ubuntu, which advocates for interconnected existence.

Yusuf Abdul-Qadir supported the idea of using open data and community networks to facilitate the United Nations Sustainable Development Goals and unlock human potential. He believes that technology and data can unite the continent and drive development, supporting the notion of a United States of Africa as a way to foster a connected and inclusive continent.

The transformative power of internet connectivity and open data was acknowledged in various sectors such as education, healthcare, and agriculture. Internet connectivity allows for the sharing of information in an open environment, enabling advancements in these sectors. The availability of cloud infrastructure and access across diverse sectors was seen as essential for enhancing capacities and ensuring digital inclusion in the African context.

Additionally, the discussion emphasized the importance of gender equality and good health and well-being. Maximizing human potential requires advocating for gender equality and prioritizing good health and well-being. Connectivity has the potential to significantly impact these sectors, leading to positive outcomes for overall development.

In conclusion, the discussion provided valuable insights into the importance of data use, technology, and connectivity in driving development and achieving the United Nations Sustainable Development Goals. The need for inclusive access to data and leveraging community networks was emphasized. Moreover, the integration of sustainable energy sources with connectivity solutions was seen as effective. Internet connectivity and open data were recognized for their transformative power, while the importance of gender equality and good health and well-being was highlighted. Overall, the discussion underscored the immense potential of harnessing data, technology, and connectivity to unlock human potential and foster a connected and inclusive society.

Session transcript

Moderator – Yusuf Abdul-Qadir:
Mic check. Okay. Welcome. From Kyoto, Japan. For our discussion. Entitled leave no one behind the importance of data in development. If you’re here, you’re in for a treat. We have some dynamic panelists here with us in person. And in line with the theme of today’s discussion of not leaving anyone behind, we have those who will be joining us virtually. Before we get into it, and before I kind of get into a long monologue here, I want to invite my esteemed colleague and friend, Dr. Danielle Smith of Syracuse University, to open us with some opening remarks. Dr. Smith.

Dr. Smith:
Thank you, Yousef. Greetings to the session’s organizers, the presenters, the audience here in person and virtually, and to all those attending the UNIGF in Kyoto. I am truly honored to welcome you to this session. And I would also like to thank the people of Japan for your very warm hospitality. I’m very thankful for the leadership of Wissam Donkor and Kwaku Antwi at Africa Open Data and Internet Research Foundation, who are joining us virtually. Their tremendous support in planning this session has been instrumental. As we know, there are many ongoing data initiatives around the world. Implementing data initiatives in Africa can play a significant role in accelerating progress towards achieving the sustainable development goals on the continent. Africa faces a unique set of challenges and opportunities. And leveraging these technologies can help address key issues and support sustainable development. However, it is also important to address challenges such as data privacy, cybersecurity, infrastructure development, and ensuring that these technologies benefit all segments of society, including those who are the most vulnerable. In addition, governments, the private and public sectors, and civil society organizations must work together to. create supportive systems for the implementation of these diverse initiatives. By effectively using such technologies, African governments can lessen existing challenges and continue to create more sustainable, inclusive, just, and prosperous futures for their citizens. The session presenters are experts in this area and can help us understand these initiatives and broader global trends. It is particularly important to learn about developments on the ground and from experts who are in the field. Thank you again for joining us, and we look forward to an informative session. Thank you.

Moderator – Yusuf Abdul-Qadir:
Thank you, Dr. Smith. As I said, we’re going to get right into this discussion. And for those joining us in person and virtually, we’ve decided to split this conversation into two main components. The first component is addressing gaps, encouraging data use, and encouraging strengthening the data ecosystems. The first set of conversations will be situated in that piece here. And then the second component is leveraging technology and community networks to make sure that everyone gets connected. It’s essential that we don’t just theoretically have a conversation about ensuring access to data, leveraging data, but making sure that everyone is included and that no one is left behind. As I said, we have an amazing set of panelists here with us in person and online, and we’re going to get right to it. So to kind of begin, I want to start with you. Let’s see, Victor Ohuguru. Forgive me and correct me in your presentation for me not pronouncing your name correctly, who’s a senior Africa regional manager at the UN Foundation for Global Partnership for Sustainable Development Data. I want to begin with you. If you can just please give us a minute or two of opening remarks and let us hear how you’re doing this work at the UN. We don’t want to leave you behind, so let’s look. Well, while we get Victor, let’s go to Kwaku. Kwaku Antwi is a leader with this collective here from the African Open Data and Internet Research Foundation. Kwaku has been, as Dr. Smith mentioned, an important leader in this conversation and someone who has helped to drive the conversation. Kwaku, if you could please introduce us to yourself and please inform us as to how the AODIRF is leading the way and making sure that not just that communities have access to open data, but what are the tools that are necessary to accelerate the SDGs?

Kwaku Antwi:
Thank you, Yusuf, and hello to everybody. My name is Kwaku Antwi from the African Open Data and Internet Research Foundation. I’m in charge of the community outreach and projects and also in organizing events around open data initiatives across Africa through our network. I think one of the most important aspects we recognize in our current dispensation in this digital world, it’s being informed or being part of what is going on in our society. Data, as they say, is a new oil which is driving our economies. And being able to access data and utilizing data is also very important for all of us. I mean, as we speak now, there’s a lot of information ongoing as we are participating in this year’s IJF in Kyoto. And when we talk about data and open data, we talk about data which is available in formats which are easily accessible on portals or repositories which do not require enormous and mitigating. circumstances for you to not be able to access that data. Open data, we can say is one of the biggest drivers of open communities and also being able for people all across the world and in communities to be able to access information. One beautiful aspect about open data is that it encourages not just the private sector, government and all other sectors to be able to share their data, to be able to have people utilize this data for purposes and where we’re able to strengthen ourselves and also enforce where there are data gaps in which we can be able to share and also improve our societies. Well, in accessing this data, we all know that we’re in a digital world now and data is not just on hard copies in some libraries or some safe havens or safes where it is and you need to be able to have the other data which is internet connectivity to be able to access this data. And that’s where we also come in in which we are bridging this divide in terms of connectivity and setting up community networks and also helping the communities themselves to have the skills to set up a network, to have the skills to be able to utilize this data, interpret it and understanding the data for themselves and also being able to transmit the data in formats which are usable, acceptable and also safe for them. So those are my open remarks and I leave the floor for the rest of the panel.

Moderator – Yusuf Abdul-Qadir:
Thank you Kwaku. I want to jump to my colleague at Syracuse University, Dr. Lee McKnight. Lee, as Kwaku said, data is gold. It is valuable. Many companies are in an AI race right now where they’re leveraging data in ways that are helping to accelerate their economic opportunities but we’ve done work in the past around ensuring that not just that we ensure that data is accessible but that we preserve people’s rights. Can you talk about the relationship between expanding access and internet connectivity with ensuring that that data is governed properly and appropriately and can you lead us into some solutions onto how you manifest that in your work?

Lee Mcknight:
Thank you. Thanks Yusuf and thank you all for being here virtually or in person and engaging in this very important conversation. I want to recall back to 2008, IGF in Hyderabad, perhaps somewhere here, there or there at that time when the coalition, Dynamic Coalition on Internet Rights and Coalition on Internet Principles agreed that it didn’t make sense to have two coalitions on rights and principles but there really should just be one going forward. of the following year. Since then, there’s been a charter on internet rights and principles created. Following that, over work with you, we’ve taken that work forward on embedding in the virtual space rights and principles for governance, for whether it’s for data rights, for privacy, for security. That now has been extended closely with you, Yusuf, to smart cities and communities. Any village, any community can be a smart community, can have embedded in its governance framework rights and principles, including for data rights. So that’s going forward to the present, where now with the work also with the Africa Open Data and Internet Research Foundation on bringing connectivity to communities anywhere in the world, we can help ensure that the rights and privileges to citizens’ data are determined by those people who live there, and they’re not automatically harvested by external forces without the consent of the community.

Moderator – Yusuf Abdul-Qadir:
Thank you. I think along those lines, we have the honorable. Can we just say this again? Honorable Samuel Narti George, a member of parliament from Ghana here with us. And Dr. McNight explicitly mentioned the importance of ensuring nothing about us without us. In essence, that we should not be accessing and determining governance principles around data without ensuring that the communities who are directly impacted have not just a voice, but are driving the conversation. Can you talk a bit about the role that you as a member of parliament can play in ensuring that data governance is inclusive of the voices of your constituents, and the work that Ghana is doing to accelerate access to data and making sure that the data ecosystems are secure and respecting your citizens’ rights?

Samuel Nartey George:
All right, thank you very much. Good morning, good afternoon, good evening. depending on what part of the world you are in. I believe that the conversation about doing this for everyone, inclusive of everyone, is extremely critical. And for me, it highlights a major disconnect because we have this conversation about the West leaving Africa, but we don’t discuss the disconnects inside of our own countries in Africa between our capital cities and the rural communities that are underserved or unserved. Because governments and parliament has to take decision on the basis of data that’s generated. A lot of this data is generated from e-government portals and services that people access online. Now, the question you need to ask yourself is the connectivity in Accra, for example, is different from the connectivity in a rural community in the northern part of Ghana. And so that data that parliament or the Ministry of Finance is going to be using to advise parliament in terms of resource allocation is going to be skewed based on the data, the source of that data, which is skewed towards the urban areas where people have higher spending power and are able to buy data, because we joke about it, but possibly what I spend on data in a week is actually a whole family up north, the whole subsistence of that family of six people for a whole month. And so the question is, if data is not as cheap and accessible and platforms are not accessible, people are not contributing to the data pool. And so we need to look at the disconnect and the digital gaps inside of our own countries on the African continent between our urban areas and underserved areas. And that’s where the community networks coming. And that’s where you have in Ghana, for example, our universal access fund, GIFEC, trying to close that gap and do last mile connectivity. I keep saying that we have a lot of conversations on these platforms about connectivity, bridging the connection, the connectivity gap, but we’re not talking about whether that connectivity we’re bridging is actually accessible or affordable. Because it’s one thing to bring a network into a community, it’s another thing determining if it’s at a price that the individuals in that community can hook up to the service. Because if you don’t get the data from the people in the underserved area, we will continue to make decisions in parliaments, in capital cities, that are skewed away from the needs of the people on the ground. And so that’s where the real disconnect is, and that’s the real quagmire that I think we need to figure out, how do we get, because government is increasingly making its decisions on the back of data sets that are generated by people’s, by digital footprints of citizens. But if in our countries we have citizens who do not have a digital footprint, because they don’t have access to internet, or even when you bring internet at very economically affordable prices, the cost of smartphones is inhibitive, because there are various segments to connection. It’s the connectivity itself, then the cost of the connectivity, and then access to that connectivity on a device. And so for me, I’m beginning to champion a case of saying that, just like in pharmaceuticals, where you have generic drugs, because the big pharma, big pharma has made profit from its intellectual property, and so a drug that’s produced by Bayer or Pfizer could cost about $100 for a sachet, but I could get that same drug by an Indian generic maker, same efficacy, but not the same brand name for $5, and by doing that in pharmaceuticals, we should begin to do the same thing in technology, where the likes of Apple and Samsung have made a lot of money off the intellectual property, we should begin to have generic devices that are going to be cheaper, that are assembled on the African continent, and then would make it easier for people to have digital footprints, because a citizen without a digital footprint cannot be part of the data sets that government is using to take decisions for them.

Moderator – Yusuf Abdul-Qadir:
As I said, honorable. Dr. Uzma Alam here from Science for Africa Foundation. Join us virtually. Dr. Uzma, really appreciate you being here. As a public health practitioner, data is key to understanding how we can solve, especially in context of the pandemic that we’ve left, are kind of leaving, or may still be in, depending on where in the world you might be. Data has been tremendous in both deploying public health resources and understanding how we are going to be efficient in ensuring everyone is taken care of. Can you please share with us a bit about what you’re doing at Science for Africa Foundation and the role that data will play from a public health perspective?

Usman Alam:
Thank you for that. And greetings, everybody, from Nairobi, Kenya. And a big, big thank you for the organization. And it’s been really exciting for me to hear the panelists who came before me, because where we are, like where the Science for Africa Foundation plugs in, we are towards the end of, we would be benefiting from what some of the panelists have started doing, especially in health. So the Science for Africa Foundation, just for context, is a pan-African organization where we fund research and innovation across Africa. But we also work with designing programs and providing ecosystem strengthening. And within that, we have a science policy engagement portfolio. And that you pointed out, too, looks at how can we drive value from African-generated data, and how do we actually stimulate what the honorable member of parliament just mentioned? How do we ensure that Africa is responsible for generating its own data? But also, how do we govern that? And I think critical issues and threads of this have come up, but something I just would like to highlight for context of this conversation is, I’ve been hearing the word data, data, data. But I think within data, what our work’s pointing to and what I think the discourse should be focused upon, or to even answer your question directly, what will take us from data to impact, is really those nuances within data. And what do I mean by that, right? So yes, there is data, but there is this need for diversity, equity, and inclusion in data. As we’ve already talked about, you know, west-driven data, a footprint that doesn’t match the African content. But within that, we have our women. Let’s not forget them, a big piece, when it comes to health, and especially the next pandemics, right? And within that, there’s also this whole concept within Africa that we really need to, if we need to get from data to preventing the next pandemic, like you said, or even drive impact. is this piece around governance, right? So Africa, yes, needs to generate its own data and we need to be responsible for governing it. But there’s this piece that, you know, we need to appreciate that data is obviously cross-cutting, right, whether it’s health, whether it’s agriculture, whether it’s finance and stuff. And what some of our work has been pointing to, especially when we start looking at governance around data policies in Africa, you know, they’re housed very specifically within, for example, majority of the ICT, you know, equivalent ministries of health, right? I mean, ministries of ICT. And that obviously has implications for how somebody in health can access that data, even at a local level, even within governments, right? And there is this fine balance of what, you know, what the mission of one is, and, you know, what the mission of the other is. I think, you know, my rallying call to this conversation and, you know, to get from data to impact would be, yes, you know, we need equitable partnerships. We need, you know, locally generated data, and that includes devices for it. But we also need to be very careful of how we govern, that we do not start governing in silos, that, you know, when the data exists, we can’t even have access to it, even at a very high levels. I think I’ll stop at that and hand back to you, Yusef.

Moderator – Yusuf Abdul-Qadir:
Thank you, wow. I think we still have Victor Uhurugu on the line. Victor, please, if I’ve, again, mispronounced your name, I am a stickler for saying people’s name right, so please let me know if that’s the case. But if you can please talk to us about your work at the UN, and in particular, how the UN is trying to bridge the gap between all of the respective conversations we’ve had here. We have academia here, we have governments here, we have civil society here, and the UN plays an important role as a convener. What are you doing at the UN, and how do you see these issues of data governance, particularly for Africa, manifesting themselves in ways that have. help to facilitate the Sustainable Development Goals.

Victor Ohuruogu:
Thank you very much. Can you hear me? Can you hear me? Yes, yes, we can hear you. Yeah, I think I’m Victor, Victor Horogu. Yeah, so good morning, everyone. Of course, good evening for some of you in some other parts of the world. I’m Victor Horogu. I work with the Global Partnership for Sustainable Development Data, which is warehoused within the UN Foundation. The UN Foundation is where we’re currently seated. The Global Partnership is a growing network of over 600 participants or partners, which includes state actors and non-state actors. These non-state actors are civil society organizations, the private sector, research, academic institutions, developer communities across the world. And these actors are set across about 35 countries in Africa, Asia, Latin America, and the Caribbean. So we’re all collaborating together to accelerate progress on sustainable development and on the SDGs particularly, but through better data. So together with our set of partners, we have looked at and we’re collaborating across three key systemic issues that have been identified together with our partners. And one particular set of that issue focuses on timely data. We do believe, and we have seen across the world that governments particularly need data on a very timely basis to making decisions and enabling the various policy instruments that they do put together. But these governments are not having that quick access to information, to data. And so we’re helping governments to make use of both non-traditional data forms and technologies that could help them have the best and quickest access to this data. We also look at… the issues of inclusive data, where we want to see marginalized groups, you know, have more agency in the data value chain. People who, you know, have been left out. We’re ensuring that governments and all other actors within the data value chain can focus on this set of people. And the third component of our program looks at accountable data governance. You know, we’re trying to unlock the opportunities of data for all, making sure that data is well governed, you know, by certain standard principles. And so what do we do in a particular country? My role covers Africa pretty much, where we have seen that there is a huge issue around capacity, just understanding what data is across different levels, both in the political and technical space, understanding what type of data is needed, you know, to drive certain policy, you know, issues, understanding how to even use that data in itself is a major issue. And so we have various programs that focuses on building capacity in terms of, you know, understanding what type of data is needed, where to source that data, how that data can be used. And we’re working with both, you know, all the actors within the value chain, particularly governments, but ensuring that we can strengthen, you know, connection and partnership between the private sector and government to drive the agenda of data. You know, we want to see how that data is given its prominence within the political space. Particularly, it would, you know, of course, not be too surprising for many of you that a lot of government actors, you know, make decisions that are not data driven. You know, of course, politicians, you know, are pretty much focused on what, you know, will get them into the place that they need to be. But oftentimes, very many of them do not reckon with data. So how do we push the political profile of data within government, but also working with the technical, you know, level guys to ensure that they have access to the right data sets that they need to support government in their various decisions and policymaking process. And so I will look forward to, you know, how that, you know, we could have it. broad-based conversation that bring all of the actors together, and we could work, you know, with all of you to address, you know, the issue of access, availability, and the use of this particular data for policy and decision-making within the continent. Thank you very much.

Moderator – Yusuf Abdul-Qadir:
Thank you, and thank you for correcting me in the pronunciation of your name. You know, as we said, this conversation will be split into two. I want to advise those who are joining us virtually that our colleague Lahari Chowdhury will be able to collect your questions. If you are joining us virtually and you have any questions, Lahari Chowdhury will be able to collect your questions in chat. That way, we can make sure that we are including everyone in this conversation. So, as I said, the first component of this conversation has kind of been addressed. We’ve talked with our dynamic panelists here, and I want to jump to, and I think actually Victor helped us really transition into the second component, the second piece of our conversation, which is leveraging technology and community networks to make sure that everyone, making sure data gets to everyone. You know, Dr. McKnight, I’ll start with you first. I don’t want to assume we have, we’re all operating on the same set of understanding of what community networks are and how we can leverage technology to both advance community networks and make sure that data gets to everyone. So, can you do two things for me? Can you first explain what are community networks? How do we ensure that it can be utilized as a mechanism to ensure access to data for everyone? And then talk a bit about some of the work that you’re doing around this particular set of questions. Sure. Thank you so much, Yusuf.

Lee Mcknight:
So, first, we can think about community networks, and I would give a lot of big credit to the Internet Society for all of its advocacy and work over many years in encouraging people to think not just of telecommunications or national level networks, but the fact that people can, in fact, build and create their own local networks. And so, that work has been ongoing for some time. I wanted to bring in here one example, maybe as this transition from the first part of the conversation to the second, and I forget her name. I should remember her name, but the mayor of a Chilean community that was previously disconnected until there was a community network during the pandemic. She said, we exist. That’s now she’s part of the data pool. Yes, she has to have rights and be protected, but now her community, she exists in a way she didn’t before. So, community networks provide a way to now bring connectivity to people, the 2.5, 2.6 billion people that exist, but they’re not counted. They’re not included in any way, generally speaking, in our conversations because they cannot reach us digitally. All right. So, now how do we go about this today? There’s many different technologies available to create community networks, and that great work has been done for some time. We here at Syracuse University, working with the Worldwide Innovation Technology Entrepreneurship Club, or WITEC, over decades, have developed a package. small form where it’s not enough to have connectivity. If you don’t have energy, right, you cannot stay connected for very long. Your battery life. So having a package that includes both a connectivity solution and is a tiny little portable microgrid solar powered, that’s been something that we’ve been evolving and has been deployed into over 20 countries now and is currently in use in Ghana for connecting school children libraries and further as its first deployment was in the Democratic Republic of Congo. So it’s possible now. It’s not like, this is not theory. This is just something that we come see it in the exhibit booth. Otherwise, you could have a more established, a larger community network with established towers and so on, but you don’t necessarily need to create any new infrastructure. We’ve talked about being the academic here, infrastructuralist networks. So we can have an infrastructuralist network that is not just a network, it’s also a microgrid that exists, that you can go see it. So this is not theory, this is fact, and we can take this, there’s 2.6 billion people that need to be connected. They exist. I’ll stop there.

Moderator – Yusuf Abdul-Qadir:
Honorable, and thank you for that, Lee. I’m struck with the we exist comment that kind of struck a particular core with me. And Ubuntu is a concept on the continent of I am because we are this notion that we are, we have an obligation amongst and with each other. Can you talk a bit about just the way that we go beyond, and I thought you put it beautifully, beyond connecting people from a very kind of academic or kind of theoretical perspective, but what does that do to demonstrate that we exist? How do we unlock people’s fullest potentials by providing them the access to data as well as the internet?

Samuel Nartey George:
Well, it literally just transforms the world. the world. It changes the entire economics of that locality. And I’ll give you a typical example of something that a project that we’re toying with in Ghana at the moment. If we were able to connect an unconnected community and then we could send them educational material, a young man who he or she would have had to go to a city center to learn a trade or go to a master craftsman could actually with a smart phone take models in how to become a bricklayer or a mason or become a skilled laborer. And that gives him an employable skill. That puts food on his table. So there’s an ability to run blended learning platforms are critical. COVID taught us a lesson in Ghana where kids who were not connected to the national grid lost a year of school. If we had community networks, because we actually put educational material on the Internet and on national TV, but some of these communities had absolutely no connectivity, be it electricity, TV or Internet. And so the kids in those schools have lost a year of their lives thanks to no fault of theirs. Now, if you’re able to connect these communities, you transform the whole ecosystem there. Because there’s someone there who’s now going to be able to run a business center. It brings a whole new lease of life to the people in there. And so, I mean, most of us in those rooms, even in capital cities, we do a lot more with data on our phones than voice calls. Our lives revolve around data. And you can just imagine what happens if you don’t have data. The first thing people ask for when they walk into an establishment, especially for all of us who have traveled here, the first thing I did at the airport was not to change money. idea that the airport was to get a data seam. Because it’s the only way I can stay connected. It’s the only way I can stay productive. If I don’t have data, I’m cut out. And so if we’re able to bring people to a place where they’re connected, you actually open a whole new specter. You just need to see the excitement in communities that get connected to 3G for the first time from 2G. Because when they’re just doing voice, they have absolutely no connection to the internet superhighway. And now the internet is actually where everything happens. The young people who’ve graduated school in urban areas who are able to make a livelihood by trading on Facebook, being able to sell, they buy things, they have a Facebook page or an Instagram page, and they’re selling. Now imagine that there’s a young man in the rural community who also has the opportunity to now become the guy who, if you need anything from the major city, he goes to pick it up, puts a little margin on it, and people in that community can actually just deal with him on WhatsApp. It doesn’t even have to be on Instagram. He can run a business page on WhatsApp where he advertises his wares, and he can transact business there. So there is real economic power that exists when you give people connectivity. Because I mean, many of us take these connections for granted. We use them for TikTok and Instagram and Snapchat. But the internet has real economic power for people who are in the most difficult positions. And that’s the power of transformation that we can bring. When we let people realize the transformative positive impact of the internet, either for business or for educational purposes, there’s a real life opportunities that can change the whole specter for people. And when people get these skills, it’s now a digital world. He can be sitting in that village and doing data processing for a blue chip company in the United States and get paid for it. Because now he’s able to lend data processing. or learn coding online, those are all opportunities that he, the two, he would have to leave that community and travel to an urban area where he most likely has nobody and be exposed to all the vagaries. But you can bring the world into the small device in the hands of that young person so long as you give them a connectivity. And I think that it’s something that as governments and as parliaments, we need to begin to prioritize to identify these communities and begin to reach out to them as a matter of course. Because for the telcos, most of those communities don’t make economic sense for them to go into in the first place. Because they’re looking at the numbers, they’re looking at the cost of running their infrastructure. And so when I hear Doc talk about infrastructure-less connections, those are the kinds of connectivities that we need. And for the kids who are using those connections in Ghana to access educational material, that’s material they would never have been able to access. But kids who are gonna write the same end-of-year exams with them who are in urban areas have access to those same materials. So you have kids going to write the same exams but are completely disadvantaged from the get-go. Now how do they pass and compete with these kids in the urban areas for limited slots in public universities? So this just bridges the gap and creates a whole new vista for these young people in those communities. And that’s why it’s very imperative that we take this as a very serious point.

Moderator – Yusuf Abdul-Qadir:
You know, I will be very transparent and say that I am a professed, avowed, and committed Pan-Africanist. And Kwaku and I have had a number of conversations personally around Pan-Africanism and the role that it plays in both facilitating a brighter future for African descendants across the globe. But Kwaku, if you could talk to us a bit about what open data and community networks can mean for helping to not just facilitate the UN Sustainable Development Goals, for not just unlocking, as the Honorable mentioned, the fullest human potential. of each of us, but also to build a United States of Africa to kind of facilitate for this connected, inclusive continent.

Kwaku Antwi:
Thank you, Yusuf. And I think the Honorable Member of Parliament has given us a segment. I think Dr. McKnight also spoke about it. Basically, when we talk about open data and also the infrastructure where you’re able to access on portals and with technologies and the data, it’s important for us just not to think. I think Dr. Ousmane talked about silos, OK? To not think about our domains where we are looking to ignite or digitalize or to push forth for the technologies to apply. And we’re just talking about education, OK? But there is endless possibilities to the innovation of the technologies and the data. So for example, I’ll just give a very short example we had this year. This year, we did an Open Data Day in which we celebrated open data in Accra. And what we did is that we brought together persons from the statistical part with the open data. We brought people back from the private sector in terms of those who use geospatial data technologies. And then we brought the space science technology. And guess what happened? They were talking to people. They were talking to themselves in the room. They were doing very similar jobs, which required a lot of data from a disaster, from climate to economic data to private geospatial data for all sorts of purposes. But guess what? They were transforming our communities. And what was the connection? What is this connection? Is the connectivity to be able to connect to the internet? to be able to talk to people, to be able to exchange. Today, I’m able to connect to you from where I am in Accra, Ghana, due to internet connectivity. I’m knowledgeable, I have that information, I’m sharing with everybody because I’m connected. I’m connected because we are all in an open environment in which we can share information. And this information is being recorded and it’s gonna be reposited in a portal or at a storage place where everybody can access. And this is the power and transforming nature of internet connectivity and the power of data and information that it brings and the richness to which. In Africa, we have this potential. And yes, we should and we are able to transform our communities with this kind of data and information that we have. Because being here and being, having that internet backpack in Winneba or in Tamale or in Wa or in Ho, I should not just be able to connect with the school children who are in this community, but I can also connect across the country. Not only so, but the cloud infrastructure that is available for you to be able to access should you have also interactive capacity in which is not just for education, but also our healthcare facilities, our agricultural facilities, and also all other facilities who are able to connect as we are doing across the continent. And as Yusuf, it’s important that we recognize our capacities and being able to enhance it in African context to be able to connect everybody, as you said, Ubuntu. We are all moving together in this forward together and we go ahead with everybody ahead. Thank you.

Moderator – Yusuf Abdul-Qadir:
Thank you Kwaku. Dr. Uzma, you and then Victor will be the last two before we open it up for conversation with those of us in the audience. Folks who may have questions online. please do send them in chat and Lahari Chowdhury will make sure to get those questions to us. And for those who are in the room, please, you don’t have to run up to the mics, but you’re welcome to also join us for questions. Dr. Uzma, the question for you is centered around good health, well-being, gender equality, and let’s leave it with those two issues first. You know, we’ve talked about unlocking and unleashing everyone’s potential. We’ve talked about the way that connectivity can ensure people’s fullest potential can be maximized, but from a practical, pragmatic perspective, this can have significant implications on ensuring good health and well-being and gender equality for women and girls. Two of the 17 SDGs addressed there. Could you please lean in a bit about those two topics for us, because we don’t want to make sure, we want to make sure that we are not leaving out an explicit call out for gender equality, for making sure that women and girls are included as well as good health and well-being. Dr. Uzma.

Usman Alam:
Oh, thanks. Thank you for that question. I love it. It’s really got me more excited. So, you know, if we pin our work around those two pillars you’ve said, right, and then bring in this piece of connectivity, it’s actually what everybody has said, right? It’s life-changing. And it’s life-changing not only for the end users, but in honesty, it’s going to be all of us, but what it also does is it changes how research and innovation is done within the African context. And that’s, you know, very critical. If this dialogue has been saying, we need to drive our research agenda, you know, we need to drive our innovation, but then, you know, as soon as we start saying that, we need to start thinking how are we going to fund this, right? And the only way we can, you know, ensure that this, that we take off, not take up the boxes, but, you know, we leverage on all these different areas, whether it’s gender, whether it’s, you know, data, whether, you know, you want to call it connectivity, is through these linkages, right? And the way I would like to look at these linkages or connections is through community of practices, right? And just to give you a small example of what this means in. in practice in how it’s implemented, right? So we are very focused on when we fund for whether it’s research or innovation to fund within this hub and spoke model, right? And where you have a lead organization that works with, you know, other organizations around. And these can be, you know, the private sector, the academic sector, the government sector. And when we say lead institutions, you know, the importance for us was like, if we just look at the funding landscape, right, whether it’s for health, whether it’s for innovation, whether it’s for agriculture, finance, or whatever in Africa, you know, there are pockets of where the funding goes. We know South Africa is strong. We know Kenya is strong. We know some of the North African countries are strong, but Africa is huge and there’s capacity across our 54 member states. So to ensure that we leverage these, you know, our model says, or works around the philosophy of, all right, you know, you’re a stronger lead institution, but you need to partner with the other stakeholders and, you know, bring in these other players that wouldn’t have access to this, right? And what that all of a sudden does is creates equity for us, right? Whether or not only in how we fund, but also, you know, how we bring in our women leadership, how we bring in other stakeholders, how we connect government to researchers and to data. And for us, so that’s a big piece of equity. So connections and connectivity is a community of, you know, practice all of a sudden translates into equity. But another thing, you know, a critical thing, and I’m surprised it didn’t come in our first discussion is when we think of data, there are lots of trust issues. We need to be honest, right? Even within academics, even within the experts. But once you provide this framework for sharing knowledge, exchange connectivity, you know, there’s this trust built. So that already then translates into sustainability and sustainability is a big part of how health is going to play out on the continent. And I think one other piece that’s, you know, really powerful for us from the health perspective is, again, when you think of, you know, connecting that first conversation we had around data and, you know, the second conversation around connectivity, there’s this piece of endogenous knowledge, and there’s a lot of endogenous knowledge in Africa that’s not pinned upon. And just to give you a very quick example, so I remember responding to Ebola in, you know, Sierra Leone, Liberia, and Guinea. And I remember, you know, there was this whole thing about isolation and stuff, and, you know, having this academic conversation of how are we going to do this at GDC. But all of a sudden, because we had this connectivity and we weren’t really, you know, networked within networks of communities, you know, it was the endogenous knowledge that drove this. Somebody from Sarah Lone said, hey, you don’t need to do this. We already do this. We already have isolation centers for our women and children, you know, when they go through measles or when they go through menstruation and stuff. So we literally, that was knowledge, that was data that existed that we could leverage on. So I think, you know, just to end, it’s, you know, life transforming and for us and the end of implementation, it, you know, drives three things, equity, endogenous knowledge and trust.

Moderator – Yusuf Abdul-Qadir:
Wow. We have a question online and Victor, I’m gonna direct this question to you. It comes to us from Daria Tamereva. She’s notes and asks, how could we effectively implement data for the humanitarian sector? We, and then another question, I’ll say the second one with you honorable, which asks, we lack digital skills. What can be done to remedy this situation? So Victor, the first question for you is, how could we effectively implement data for the humanitarian sector? And then a question on digital skills and remedying that for you, honorable.

Victor Ohuruogu:
Yeah, thank you. Thank you so much. So how could we implement data for the humanitarian sector? So for us, we, one of the things that we, you know, come to discover is that the, within the principles of, you know, leaving no one behind, we’ve come to realize the fact that the development sector straddled between the public sector and the private sector seems to be very much advanced as the private sector is, but within the public sector space, there’s a lot of capacity issue. And so our concern is. how do we bridge this capacity issue that enables the public sector, particularly coordinating with the development space in the private sector to respond more effectively to humanitarian issues when and when they do break out in Africa. And so we’re looking at a couple of things, particularly within the public sector space to strengthen their capacity in responding to humanitarian issues. Of course, when COVID broke out, it has, of course, humanitarian dimensions to it. And what did we do with a couple of countries across Africa? One is the fact that we did discover that infrastructure was a major, major challenge for the public sector in responding to issues such as humanitarian challenges. Infrastructure, such even as computing infrastructure that enables them to bring together all the key information and datasets that enables them to make quick decisions. So we were working with a number of the presidential tax policies on COVID-19 to, of course, bringing our private sector partners, looking at where infrastructure is needed for immediate deployment and where causing that partnership to occur. Second is the issue of capacity, knowledge, and skill in using this infrastructure and data to making the decisions that governments are needed to make at that point in time. And then we identify what are those capacity-related issues and brought together a set of our partners who are helping to train public sector officials across Africa in identifying and using the type of data that is needed to support governments in making decisions. We have partners like Grid3, who was working with a number of health institutions, the national statistical offices across Africa in a couple of countries to bring the sort of datasets that they need, both from the economic side, from the health side, you know, just, you know, mashing this data. data sets together to provide analytics and insights to enable government to making decision. The third area was the area of the capacity around understanding sovereignty issues with respect to data. This data that is needed for decision purposes are being collected from a set of people across various communities. We were opening the eyes of government to ensuring the fact that these people whose data is gonna be collected and used must have a say in the way that data is being collected from them and in the way that that data would eventually be used. And so we wanted to ensure that everyone whose data is being used must have a say, they have a right, their rights as well must be protected. But more importantly also we saw that a lot of folks from outside of Africa were all jumping into Africa, demanding for data from government, using that data without recourse to certain principles, sovereignty principles in those locations. And so we’re opening the eyes of government and strengthening the capacity to manage these data sets in terms of understanding that the first issue has to do with ownership. Government within those spaces where those data sets are being collected must exercise ownership over such data. We also wanted to be sure that those data sets need to also be located within the confines of those countries. There were countries that were willing to work but they insisted that the data sets must not leave the shores of their country, they must be used for the purposes for which they were collected for. And of course the issues of privacy and protection, how data moves from one country to the other, even building the capacity of governments in terms of governing that whole data space itself was a major issue. Governments across Africa in most instances, the public sector institutions that handle these processes don’t have real capacity within the issues of data governance. So we’re also helping to build those capacities to ensure that the data sets that are gonna be used are limited, you know, to the borders of that nation, but of course also are very much, you know, the people comply with the protection laws, the confidentiality issues that that data brings upon all of us. So in a nutshell, the humanitarian sector needs data and critically, how do we help them is also making sure that the data that they need can be available. It’s accessible in formats that they can take and use this data for quick decision-making. And so we are hoping that one of the things that governments in Africa would really focus on is in data infrastructure that enables data systems from across various institutions to be connected together and to grant, you know, better access to everyone, particularly within the government sector who needs to use these data for policy and decision-making. Thank you.

Moderator – Yusuf Abdul-Qadir:
Honorable, before you jump in, are there questions in the, if you have questions in the room, please line up and we’ll make sure to get you honorable and then we’ll have these two questions.

Samuel Nartey George:
Yes, please. Sorry, oh. Okay, I’m just gonna keep it short so you don’t have to stand for too long. Basically, when it comes to education, it’s a very simple process. We need to be able to work with civil society and the technical society to build the capacity and do the training programs. Members of parliament need to be able to, on our own, we can offer the training, we can give the training, but we can partner with civil society organizations and technical society to bring the skill sets to our constituents. So for example, I could put a quota from my constituency development fund towards acquisition of the skills and then run them as boot camps so that the local constituents are able to get some of these digital skills. So I think it’s going to, and then that’s just at the level of the member of parliament, but government as a whole must begin to look at how it can also run these programs. In Ghana, for example, we have the Girls in ICT program, which focuses on girls in. in second cycle schools, takes basically a road show to the schools, trains them in basic code writing and hacking and ethical hacking, and gives them some kind of digital skills. And so governments can be able to invest in those kinds of programs as well. But ultimately, the resource must come from private sector, civil society. We must be able to build those synergies and give those skills to the people.

Audience:
Thank you. My name is Jarell James. I founded something called the Internet Alliance. And I’ve done a number of cryptography projects before this. I work in a deeply technical field on emerging technologies around cryptography for quite a while. And so I’d like to ask a question specifically around leverage. And as someone who is a devout Pan-Africanist, I do think this is very relevant. When we talk about like Walter Rodney or Thomas Sankara, do we think that their values around African intellectualism, African ingenuity, and valuing that as a resource, do we feel that that is reflected in the way that African nations are leading their countries with development around data sharing? When Victor spoke about not letting data leave borders, I mean, in a lot of ways, when we work in cryptography, it’s like data can leave borders. But it can only be accessed by those who have direct cryptographic key access to that data. And there is this idea that Africans are just supposed to share everything that they have for the sake of development, for the sake of checks, for the sake of investment from expat. So my question is really around community networks and all of this stuff is really great. But is there ways and better approaches to leveraging data as an actual asset? Leveraging the fact that GSMA predicts the greatest boom and growth in internet users is going to be Africa and is Africa year over year. So when someone comes in and builds telecommunications infrastructure, and they choose not to go to a region that seems to be not profitable, do we feel that there is? room there to say, well, if you want access to this data, we have leverage over you. Can we develop more? I just want to hear some thoughts on this. Because it seems like we’re operating from a very subservient position there.

Moderator – Yusuf Abdul-Qadir:
I love the question. I’m going to take the second one, and then we’ll try to get both.

Audience:
OK. Sorry. OK, so my name is Emmanuel from Togo. So mine is more like a contribution, because recently I worked with APC and other organizations on a report regarding data protection on the continent. And what we noticed is that on the continent, we are leapfrogging. Like, we are collecting too much data. I mean, in our countries now, you see the telcos are collecting. The hospitals are collecting. Everybody’s collecting data. So the consequences in the future can be very huge with all the emerging technologies we are seeing today, like AI. So the consequences for the continent we have to be careful. They will be very, very huge. And it is important for us to develop our data protection laws on a human-based approach, human-rights-based approach. Because most data protection laws on the continent today are not developed on a human-rights-based approach. And by human-rights principles, I mean the accountability. There are a lot of data breaches in Africa today, but who do we hold accountable? So there’s that accountability. There’s also the discrimination, equality, empowerment, and legality. I know a lot of countries in Africa today are actually enacting data protection laws for the sake of a check. So it’s something that we have to actually also see that, is it really necessary for us to collect this kind of data? Because we collect two main data. If I take the voters, for example, in my country, they collect their biometric data. They take their picture. They take their 10 fingers. They take all those data, but the government does not have any infrastructure locally to store those data. So those data are somewhere in Belgium with a private company. Nobody has access to those contracts to see the accountability level of those type of contracts. So 3 million voters have their data with a private company somewhere in the world. So those are some of the aspects that we have to also look at. I know in West Africa now, they are building a regional data registry for countries like Benin, Burkina Faso, Togo. where the World Bank has actually put in more than 300 million dollars to actually build that registry. But the problem is that our government usually, because if I take the case of Togo, they took that check, and before taking the check, the prerequisite was to vote a data protection law. They voted the law, but there’s no agency to implement the law. So there’s no need to have a law if we cannot implement it. So those are some of the things that we have to look at when we are actually voting those laws. We have to be able to implement it. We have to be able to actually fight for our data rights. We have to know who has access to it, to what level, can we correct it, and all those kind of mechanisms, we have to put them in place before going for those checks. Thank you.

Moderator – Yusuf Abdul-Qadir:
Honorable, if we could do this in a rapid fire, and I would be remiss if we didn’t afford the folks who asked the questions to continue the discussion. So we’ll continue the conversation. We’ll probably have to walk outside to do it, but we’d be happy to do so. Honorable.

Samuel Nartey George:
I honestly wish these questions came like 30 minutes ago, and I agree with you. Africa doesn’t know what we’re sitting on. We’re being exploited. Most times when we talk about exploitation in Africa, we think of just the natural resources, but data is being exploited big time. It’s being exploited because the big demographics are sitting on the African continent, and we have leaders who just don’t understand the whole economics of data, and it’s a big problem. And I think that there’s an awakening coming, and the point you just made, and this morning’s parliamentary track, it was a point I made to the panel. I said to them, it appears as though we come to these platforms, and there’s a checkbox that countries need to tick. So we need to have data protection laws. We need to have cybersecurity legislation. We run back, we go past the legislation, and then we get good ratings by international organizations. What they don’t do is then find out, Africa has some of the best. legislation, but implementation is zero. So there should actually be a matrix of checking implementation of legislation that’s been passed. Because, for example, you have Egypt that has a data protection law, but there is zero implementation of data protection in Egypt. And so there is no value to the citizenry there. Nigeria had a data protection law and just only three months ago set up a commission. So there are real issues here, but the international community is interested in saying, oh, this country has passed the data protection law, they’re doing a great job. And because we want to please Western capitals, because of the corruption of African leaders, we’re unable to actually deal with what is really requisite. But I think we’re running out of time. We’ll continue this conversation, but I think that we need a new generation of African leadership that knows that our data is critical and we need to hold it.

Moderator – Yusuf Abdul-Qadir:
With that, we are at time. I want to thank the panelists here for all of the conversation that you helped to drive us towards. Give the panelists a round of applause. For the folks who are online, thank you for joining. Unfortunately, we’ve got to go, but we will continue the conversation and we look forward to having you all join us at our booth. It’s in the main event hall, excuse me, in the main exhibition hall. You can find us at ACIP.org. We’re next to the kimonos, apparently, so grab a kimono and talk with us. And Dr. Smith, did you want to say anything before we close?

Dr. Smith:
Well, yeah, I just wanted to say quickly to the brother, because Pan-Africanism is about recognizing our humanity. The idea of a United States of Africa relates to your question, and I don’t think it’s as futuristic as it seems. It’s actually an idea that has been talked about from the OAU. So I think Walter Rodney’s idea of Pan-Africanism is really Africans at the grassroots level. While we need the politicians, they have the responsibility. and roles, we cannot achieve this goal of a global Africa, of a United States of Africa, without the mass mobility of the young people, of grassroots people. And it’s important to leverage the technologies that we have to move this social movement of a United States of Africa forward. And we can achieve it.

Moderator – Yusuf Abdul-Qadir:
Perfect ending. Thank you. Thank you. Thank you. Thank you. Thank you. you

Audience

Speech speed

198 words per minute

Speech length

937 words

Speech time

285 secs

Dr. Smith

Speech speed

136 words per minute

Speech length

470 words

Speech time

207 secs

Kwaku Antwi

Speech speed

165 words per minute

Speech length

1085 words

Speech time

396 secs

Lee Mcknight

Speech speed

166 words per minute

Speech length

750 words

Speech time

272 secs

Moderator – Yusuf Abdul-Qadir

Speech speed

191 words per minute

Speech length

2192 words

Speech time

689 secs

Samuel Nartey George

Speech speed

186 words per minute

Speech length

2513 words

Speech time

809 secs

Usman Alam

Speech speed

195 words per minute

Speech length

1538 words

Speech time

472 secs

Victor Ohuruogu

Speech speed

180 words per minute

Speech length

1768 words

Speech time

589 secs

Policy Network on Artificial Intelligence | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sarayu Natarajan

Generative AI, a powerful technology that enables easy content generation, has resulted in the widespread production and dissemination of misinformation and disinformation. This has negative effects on society as false information can be easily created and spread through the internet and digital platforms. However, the rule of law plays a crucial role in curbing this spread of false information. Concrete legal protections are necessary to address the issue effectively.

Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation. This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework.

The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the global south. These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.

Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.

Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance. It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical. This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications.

While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs. Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.

In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, posing negative effects on society. However, the rule of law, through proper legal protections, plays a significant role in curbing the spread of false information. A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue. Inclusivity, diversity, and mutual understanding between AI technology and policy domains are crucial considerations in the development and governance of AI. It is essential to closely monitor the impact of AI on job loss and ensure fair working conditions for all.

Shamira Ahmed

The analysis focuses on several key themes related to AI and its impact on various aspects, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance in the intersection of AI and the environment. This aspect is considered to be quite broad and requires attention and effective management.

Moving on, the analysis advocates for a decolonial-informed approach to address power imbalances and historical injustices in AI. It emphasizes the need to acknowledge and rectify historical injustices that have shaped the global power dynamics related to AI. By adopting a decolonial approach, it is believed that these injustices can be addressed and a more equitable and just AI landscape can be achieved.

Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future. This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion. It emphasizes the need for a balanced approach that takes into account the needs of the environment and all stakeholders involved.

In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations. It emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard. By doing so, it aims to create an AI governance framework that is inclusive, fair, and capable of addressing the challenges associated with historical injustices.

Overall, the analysis provides important insights into the complex relationship between AI and various domains. It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.

Audience

The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation. Misinformation and the abundance of information sources further exacerbate the challenges in capacity building.

The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape. This education should be accessible to all, regardless of their age or background.

Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI. This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence.

Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations. Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems.

The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools. The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology.

The process of contributing to AI training and education through UNESCO was discussed, highlighting the involvement of a UNESCO member who distributes AI research materials and textbooks to universities, particularly in developing countries. This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all.

The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance. This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations.

Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios. This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use.

In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology. The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance. The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.

Nobuo Nishigata

The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.

Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South. This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance.

The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities. It underscores the significance of discussing uncertainties and potential risks associated with AI, alongside the numerous opportunities it presents. Additionally, it highlights the potential of AI to significantly contribute to addressing economic and labor issues, as evidenced by Japan considering AI as a solution to its declining labour force and sustaining its economy.

Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens. This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.

The analysis also provides insights into the ongoing Hiroshima process focused on generative AI. It highlights that discussions within the G7 delegation task force are centred around a code of conduct from the private sector. Notably, the report suggests support for this approach, emphasising the importance of a code of conduct in addressing concerns such as misinformation and disinformation.

Flexibility and adaptability in global AI governance are advocated for in the analysis. It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.

Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems. The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building.

The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems are also highlighted. These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically.

In addition to these main points, the analysis touches upon the significance of education and harmonisation. It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.

Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices. It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.

Jose

The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.

A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry. In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, which is attributed to the pressure exerted by new platforms demanding different delivery times. This highlights the need to address the adverse effects of tech advancements on workers’ well-being.

The impact of the tech industry on sustainability is another topic of debate in the analysis. There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.

The use of biometric systems for surveillance purposes comes under scrutiny as well. In Brazil, the analysis reveals that the criminal system’s structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.

There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology. This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts.

The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives. This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies.

Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies. Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes.

The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes. This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology.

Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups. The example of the Anamames ethnicity in the Brazilian Amazon suffering due to the activities of illegal gold miners underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations.

Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies. This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry.

In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders. It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.

Moderator – Prateek

The Policy and Analysis Initiative (P&AI) is a newly established policy network that focuses on addressing matters related to AI and data governance. It originated from discussions held at the IGF 2020-2022 in Addis Ababa. The P&AI aims to tackle policy issues relating to AI and data governance, and has recently released its first report.

The first report produced by the P&AI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.

One of the noteworthy aspects of the P&AI is its working spirit and commitment to a multi-stakeholder approach. The working group of P&AI was formed in the true spirit of multi-stakeholderism at the IGF, and they collaborated closely to draft this first report. This approach ensures diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.

Prateek, an individual interested in understanding the connectivity between AI governance and internet governance, sought insights on the interoperability of these two domains. To gain a better understanding of the implications of internet governance on AI governance, Prateek engaged Professor Xing Li and requested a comparison between the two in terms of interoperability.

During discussions, Jose highlighted the need for a deeper understanding of local challenges faced in the Global South in relation to AI. This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions. Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing their specific challenges.

In the realm of AI training and education, Pradeep mentioned UNESCO’s interest in expanding its initiatives in this area. This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programs in AI.

In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work. This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs.

In conclusion, the Policy and Analysis Initiative (P&AI) is a policy network that aims to address AI and data governance matters. Their first report focuses on various aspects of AI governance, including gender and race inclusion and a just twin transition. Their multi-stakeholder approach ensures diverse perspectives are considered. Discussions during the analysis highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connectivity between AI and internet governance. Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.

Maikki Sipinen

The Policymakers’ Network on Artificial Intelligence (P&AI) is a relatively new initiative that focuses on addressing policy matters related to AI and data governance. It emerged from discussions held at the IGF 2020-2022 meeting in Addis Ababa last year, where the importance of these topics was emphasised. The P&AI report, which was created with the dedication of numerous individuals, including the drafting team leaders, emphasises the significance of the IGF meetings as catalysts for new initiatives like P&AI.

One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions. The reasoning behind this is to establish the knowledge and skills required to navigate the intricacies of AI among both citizens and the labor force. The report points to the success of the Finnish AI strategy, highlighting how it managed to train over 2% of the Finnish population in the basics of AI within a year. This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities.

Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance. The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.

Diversity and inclusion also feature prominently in the report’s arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.

Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance. It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.

In conclusion, the P&AI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.

Owen Larter

The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals). Microsoft has established a Responsible AI Standard to guide their AI initiatives, demonstrating their commitment to ethical practices.

Owen, another speaker in the analysis, emphasises the importance of transparency, fairness, and inclusivity in AI development. He advocates for involving diverse representation in technology design and implementation. To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.

Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.

However, it is crucial to strike a balance between openness and safety/security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. The analysis suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.

Furthermore, the need for a globally coherent framework for AI governance is emphasised. The advancement of AI technology necessitates establishing robust regulations to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance.

Standards setting is proposed as an integral part of the future governance framework. Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.

Understanding and reaching consensus on the risks associated with AI is also deemed critical. The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.

Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities. Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.

Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies. Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.

Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.

The analysis also underscores the significance of capacity building and actions in driving AI development forward. Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.

In conclusion, responsible AI development requires a multi-faceted approach. It involves developing AI in a sustainable, inclusive, and globally governed manner, promoting transparency and fairness, and striking a balance between openness and safety/security. It also necessitates the establishment of a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.

Xing Li

The analysis explores various aspects of AI governance, regulations for generative AI, the impact of generic AI on the global south, and the need for new educational systems in the AI age. In terms of AI governance, the study suggests that it can learn from internet governance, which features organisations such as IETF for technical interoperability and ICANN for names and number assignments. The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance.

The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation. It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.

The analysis also highlights the opportunities and challenges presented by generic AI for the global south. Generic AI, consisting of algorithms, computing power, and data, has the potential to create new opportunities for development. However, it also poses challenges that need to be addressed to fully leverage its benefits.

Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age. Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.

Finally, the analysis supports the establishment of a global AI-related education system. This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.

In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulations to foster innovation in generative AI, addressing the opportunities and challenges of generic AI in the global south, and reimagining education systems for the AI age. These insights provide valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on various aspects of society.

Jean Francois ODJEBA BONBHEL

The analysis provides different perspectives on the development and implementation of artificial intelligence (AI). One viewpoint emphasizes the need to balance the benefits and risks of AI. It argues for the importance of considering and mitigating potential risks while maximizing the advantages offered by AI.

Another perspective highlights the significance of accountability in AI control. It stresses the need to have mechanisms in place that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior.

Education is also emphasized as a key aspect of AI development and understanding. The establishment of a specialized AI school in Congo at all educational levels is cited as evidence of the importance placed on educating individuals about AI. This educational focus aims to provide people with a deeper understanding of AI and equip them with the necessary skills to navigate the rapidly evolving technological landscape.

The analysis suggests that AI development should be approached with careful consideration of risks and benefits, control mechanisms, and education. By adopting a comprehensive approach that addresses these elements, AI can be developed and implemented responsibly and sustainably.

A notable observation from the analysis is the emphasis on AI education for children. A program specifically designed for children aged 6 to 17 is implemented to develop their cognitive skills with technology and AI. The program’s focus extends beyond making children technology experts; it aims to equip them with the necessary understanding and skills to thrive in a future dominated by technology.

Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future generations. The proposed solution involves providing options, solutions, and education on technology to empower young people and prepare them for the technologically advanced world they will inhabit.

In conclusion, the analysis underscores the importance of striking a balance between the benefits and risks of AI, ensuring accountability in AI control, and promoting education for a better understanding and access to AI innovations. By considering these facets, the responsible and empowering development and implementation of AI can be achieved to navigate the evolving technological landscape effectively.

Session transcript

Moderator – Prateek:
Good morning, everyone. To those who have made it early in the morning, after long days and long karaoke nights that all of us have been having here in Kyoto, welcome to this session on the launch of the report of the Policymakers’ Network on Artificial Intelligence, which was set up by the IGF. I would briefly mention the names of the esteemed panelists here before handing the floor to Mikey to introduce the Policymakers’ Policy Network a bit. We have Mr. Nobuo Nishigata from the Japanese Ministry as the representative of the host country with us. We have Mikey Sipinen, who is the editor for, and the, how do you say, editor for this report. With us we have Jose Renato, who is joining us from Brazil. We have Sarayu Natarajan, who is the co-founder of Apti Institute in India. We have Professor Xing Li from Tsinghua University in China. We have Mr. Owen Larter from Microsoft. And we have Jean-Francois Bombel, who is an expert on artificial intelligence and capacity building, I must say. And I am Pratik Sibal. I’m a program specialist at UNESCO. And I would also like to recognize our online moderator, Ms. Shamira. We don’t see her yet, but she will be also joining us in the discussion, especially on the work that she’s been doing on environment. And the two co-facilitators for this work that we have with us, we have Amrita Chaudhary and we have Odas with us, who are here. So Mikey, I’ll pass on the floor to you first to introduce what was the reasons for setting up this working group. How did the work progress? What is it that the multi-stakeholder community at the IGF was able to achieve? So, over to you.

Maikki Sipinen:
Thanks, Prateek, and a warm welcome to this early morning session to all of you also on behalf of the P&AI community. My name is Maikki Sipinen, and I’m the coordinator of P&AI. I’m not going to take too much time away from our expert panelists describing the process that led us here, but important to know that the P&AI is a really new thing. It’s only about six months old policy network, a toddler, should we say, and the P&AI was actually born from the messages of IGF 2020-2022 that was held in Addis Ababa last year. So, this is a nice example that the discussions we have here at the IGF meeting actually are very important and can result in concrete new things like the P&AI, so that’s quite inspiring. So, P&AI addresses policy matters related to AI and data governance, and we have today gathered here to discuss and debate and maybe even later challenge P&AI’s very first report. And for those of you who didn’t have yet the chance to have a look at the report, you can find a link to it in this session’s information page in the agenda. And what else? Well, many, many, many people have worked super hard to make this session and… especially to make this P&AI report come into existence, and especially our excellent drafting team leads, and I know they are listening in and joining this session online from different parts of the world. Some of them have woken up at 1 a.m. or 2 a.m. to tune in, so that’s a really nice example of the P&AI spirit. But I would like to hand it back to you, Prateek, to get us started with our expert speakers.

Moderator – Prateek:
Thanks, Maike. I can definitely attest to the fact that the working group, it’s in the true spirit of multi-stakeholderism at the IGF that this working group was formed, and then the way they’ve worked through open, first identifying what themes to cover through an open consultation, and then they’ve worked through streamlining through information meetings, inviting speakers to talk about different topics, and then collaboratively drafted this report. So congratulations to the authors, the lead authors, the team leads, and the others who contributed. So the report is available on the website of the IGF. I would encourage you to go through it. It’s a fantastic product of collaborative effort. In the first report that we have launched today, we have three themes. The first theme is talking about interoperability of AI governance, and this is primarily focusing on convergence and divergence among different regulatory initiatives with respect to artificial intelligence. So the group has mapped various initiatives in AI governance from the EU to China to the US to Latin America to Africa, and their intention has been to put forward countries or discourse that has not been so represented in the global discussions on AI forward. So they’ve centered a lot of the Global South initiatives in this report. The second theme covered by the report is they basically tried to frame the AI life cycle for gender and race inclusion. Some of the questions that they’re asking over there are, do AI systems and harmful biases reinforce racism, sexism, homophobia, transphobia in societies? These are particularly important questions that the researchers have focused on. And then finally, the third section of the report really talks about governing AI for a just twin transition. And when we say twin transition, it’s the digital and the environmental transition. And this section really explores the intersection of AI data governance and the environment. So having talked briefly about the report, I would first invite our host country representative, Mr. Nobuo Nishigata, to say some opening remarks and also perhaps contextualize a little bit the discussions around generative AI, which kind of prompted this reflection on artificial intelligence governance through a multi-stakeholder perspective. Over to you, sir.

Nobuo Nishigata:
Good morning, good afternoon, good evening to the online participants wherever you are. My name, thanks for the kind introduction. My name is Nobuo Nishigata from the Japanese government. I work at the Ministry of Internal Affairs and Communications. And I’m doing the division director there. And I joined this network in maybe July this year. And first of all, congratulations to you all to launch the report. Just say it’s a very young organization to do this, but frankly, I was very much impressed by the content of the report. So, and I understand that this work continues beyond this IGF. So we are looking forward to working with you together further. And then just a couple of things I’d like to mention, maybe just from the content of the current report. So I understand that this report is not a comprehensive analysis type of report. Rather, this is more like having the fresh angle to what we have for AI and what we have to do for AI policy development, those kind of things. Then just let me compare that with my previous work, since I used to work at the OECD in Paris, and I was in the team who developed the council recommendation on artificial intelligence in 2019. You are the first intergovernmental kind of policy standard at that time. Then I had four years of experience out there, and then compared to that report, for example, just Pratik introduced the three main themes on the report. The first one was the interoperability in AI governance. This kind of resonates with what we had at the G7. Japan hosted the G7 meeting this year. Then we had, in April, the ministerial meeting for digital and tech ministers. And then one of the major topics out there was the interoperability of the AI governance. Then for the G7 members, interoperability means that we know that in Europe, the negotiation for the agreement for the AI Act is taking place. On the other hand, actually Japan is the first country to propose the AI policy discussion in the G7 in 2016. It has been kind of getting a long history right now. but the G7 members continue to discuss on what we should do for the better AI, trustworthy AI, those kind of things. So then, getting back to the point of interoperability, on one side of this planet, the countries are working hard to establish new legislation on AI, but on the other hand, for Japan, we don’t think we need the legislation right now on AI. We need more innovation, we want to look at the possibility of what AI can do for us, because, for example, Japan is facing a severe problem of losing the population. So, we’ve already seen the decrease in the labor force in our country, so we need more machines to sustain our economy. So that was a point back in 2016, and we asked the G7 members to discuss further on AI, because we already knew that there could be some uncertainty or risks brought by that technology, while looking at the many, many opportunities there. So that’s the reason that we wanted to start the discussion, and it goes to the OECD, UNESCO, and many organizations right now. So, it’s a great turnaround, actually. Then, for this report, I would say it’s a much wider focus. I mean, not a focus, but a wider perspective. Many different perspectives on interoperability, and we can see some commonalities, but also the differences. This is a great point, but this network discusses AI policies through the global south lens, and this is a point that the G7 doesn’t have, actually. So, to me, it’s a very refreshing thing. And maybe about the third topic of this report, this is about, of course, it’s on the environment, but of course, it deals with the data governance, right? And then while I was at the OECD, then my colleague was just launching the recommendation on enhanced access to data and the sharing of the data. And to me, that recommendation made sense a lot, but on the other hand, the same thing, once we got the real case study within this report, then of course, we saw some similarity between the report and the case study and the council recommendation from the OECD, but on the other hand, we saw some difference. And this is just brought again by the Global South Lens, so this is great, and then maybe I should stop here. So then maybe just touch on, maybe I should be back on this point, but just flagging that this year, the G7 leaders, actually, I talked about the ministerial meeting, but the ministerial meeting, the declaration went escalate up to the leaders summit this year, and then the G7 agreed to establish what we call the Hiroshima AI process, and this is more focused on the generative AI, and just taking stocks. And as well as try to identify the challenges and risks, of course, as well as the opportunities brought by this new technology, so.

Moderator – Prateek:
Thank you, sir. So one key takeaway before we come to other panelists is that this report can inform some of the G7’s work, which is coming up from a Global South perspective. I think that would be a fantastic outcome for the work that has been done here. And I just wanted to, I’ll come back to the Hiroshima process in a bit. I wanted to open the floor a little bit on generative AI, and one of the issues the report talks about is around potential monopolization around this technology. And they raise questions around how can we make generative AI systems and development more open, transparent, accountable. And I wanted to come to you, Owen, to hear your perspective on how can generative AI systems be developed in a more open, transparent way, and what is Microsoft doing in this domain? For about three minutes.

Owen Larter:
Thank you very much, and great to be here. So I’m Owen Lata from Microsoft. It’s a pleasure to be here, and congratulations on such a thoughtful report, which I think really does hit on three of the really important issues that we need to get right with artificial intelligence. We need to make sure that we’re governing this globally. We need to make sure that we’re doing this in a sustainable way, and we need to make sure that we’re doing it in an inclusive fashion. We’re very enthusiastic about AI at Microsoft, as you can probably imagine. So the generative AI that you talk about we think is gonna be very powerful in helping people be more productive in their day-to-day lives. So we have our Microsoft co-pilots, which are helping people be more productive in using our Microsoft Office technologies. We also think that this technology is just gonna be a huge opportunity in helping people better understand and manage complex systems. I think you ask a really good question about how to make sure we’re building this technology in an inclusive fashion. And one of the things that we’re really mindful of at Microsoft is hitting these fairness goals, doing things in an inclusive fashion. So that starts by having really diverse teams at Microsoft that are building these technologies. So part of our Responsible AI program at Microsoft is our Responsible AI Standard. We have three goals in there, which are our fairness goals, F1, F2, F3. People can go and see our Responsible AI Standard, which is a public document that we’ve shared so that others can critique it and build on it. And a key part of these goals is making sure that we’re bringing together people from a diversity of backgrounds to build these systems. So people with research backgrounds, people with engineering expertise, people that have worked on products, people with legal and policy backgrounds, and people that have worked on issues like sociology, like anthropology, so we have a really diverse set of inputs into how a technology or a system is being designed. I think more broadly, beyond that, there’s a really big question on how we make sure that we’re having a sort of representative conversation around governance as well. So we’ve been doing some work at Microsoft to try and broaden the range of inputs that we get into our Responsible AI program. We have a Responsible AI Fellowship program that we’ve set up. We’ve been running this for about a year now, and this is really pulling together some of the brightest minds from across the global South working on responsible AI issues to help inform the way that we are designing the technology, but also designing our governance program. So we have fellows from Nigeria, Sri Lanka, India, Kyrgyzstan. These have been really rich conversations to hear about how others across the world are thinking about these technologies and how to use them responsibly, and so we look forward to taking that work forward.

Moderator – Prateek:
If I can press you a little bit on this point about openness versus closed AI development, we have seen several open-source initiatives, and there are several which are not. How would you weigh in on this debate?

Owen Larter:
Yeah, it’s a great question, and I think open-source is really important. I think open-source is gonna be really important to helping advance an understanding of how to use this technology safely. I think it’s also gonna be really, really important in making sure that we’re distributing the benefits of this technology in a broad way. I think open-source can play a really important role there. So we’re very supportive of open-source. We’re a big contributor to the open-source community. We open-source a number of our models. GitHub, which people might be familiar with, is a Microsoft company that has been a big open-source ethos in its spirit. I think there are some questions around the trade-off between openness and safety and security at a certain level, and I think really highly capable models, what sometimes people refer to as frontier models, which are sort of at the highest end of capabilities of what we have today or beyond, I think there are real questions there around whether it makes sense to open source those, or if you are gonna open source them, or to explore different ways of making these models available, perhaps having some kind of middle path where you don’t necessarily release the model weights, but you advance greater access to the technology. So I think attention there, but I think it’s really important that we appreciate that open source will be a really important part of the discussion going forward.

Moderator – Prateek:
Thanks, Irwin, for those thoughts. And I think perhaps this is something that is food for thought also for the next work plan of this group to think about open source models and how can that be integrated in the policy discussions further. Soraya, I wanted to turn to you also on this question around generative AI. And the report also talks about some of the potential risks and harms to democracy, human rights, rule of law, and so on. And one that I would like to focus on is disinformation and misinformation. Can you share with us what are the ways in which generative AI systems can be used towards spreading disinformation? And what could be some ways in which we could address this?

Sarayu Natarajan:
Thank you very much. Thank you to the audience for being here and the online audience. I’m assuming there are a range of competing factors for you, dinner, lunch, sleep. So thank you very much for being a part of this conversation. Congratulations also to the team that’s written the report. It’s been through most of it and it’s a fantastic report. It has a cadence and a thoughtfulness that comes from collaborative work. and it was absolutely wonderful to read, so congratulations on that. Delving into the specific question on misinformation and disinformation, I think it’s critical to understand first how generative AI can enable the creation of misinfo and disinfo. What generative AI does, or what the capabilities of generative AI can imply is that the cost of generating content, which is the base of misinfo or disinfo, is basically zero. So if you are capable of writing the right kind of query or code, it’s quite easy to generate information. As language models are available in very many other languages, this capability is also therefore available in several languages. So what has happened through generative AI is the reduction of the cost of content production to zero. The internet and digital transmission in general has reduced the cost of transmission to zero. So when you put those two together, there’s absolutely no friction to the production and dissemination of problematic content. And problematic content, I mean there are several typologies in the literature, I mean between misinfo and disinfo, there are differences, and the consequences of these can also be manifold. In terms of reduction, I think, or stemming or curbing misinformation and disinformation, I think the report, while may not specifically focusing on these areas, does put in place or talk about several approaches that could be used. One of course is thinking about AI or AI, generative AI is embedded in specific contexts and taking a very context-specific lens to the stemming of misinfo, disinfo. Which means understanding both the context in which this is generated, so is it corporate, is it the state? who’s responsible for the generation of information, and then also spending time to understand how this disinformation process works. But all of this, the takeaway I would say is that in the absence of broader protections embedded within the law, and I say this carefully, are conscious that misinformation, disinformation is a polemical topic in its own right. But the rule of law as a guiding frame within which any inquiry about how to stem this problem might be the right approach to start with.

Moderator – Prateek:
Thanks, Rahim, for that. Nobu, you’ve mentioned briefly the Hiroshima process and that it is going to focus on generative AI. We’ve heard that quite a bit over the past three, four days. Can you give us some specifics of what is it that it is looking at? What are the kind of principles? I don’t know if it’s advanced enough for you to share that, but can you shed some more light on what they’re going to come up with?

Nobuo Nishigata:
Maybe a couple points until now. I mean, that the process has not ended yet, and actually the G7 delegation taskforce teams, the very hard negotiation they engaged in and then tried to finalize a report back to the leaders by the end of this year. So, but still, though, as interim, maybe I can introduce that the G7 ministers agreed on the ministerial declaration for the interim. It just published in early September, I think the 7th of September this year. Then, like a couple of things, I mean, the discussion is more having more focus on the code of conduct from the private sector. That’s the first one. So it’s more like a voluntary things. but on the other hand, we have some discussion particularly just we are talking about the misinformation, disinformation type of things and then we have some discussion about the watermarking. Maybe they’re very aligned with what she said about the proof or just made by AI, those kind of things. So maybe we are in a good alignment, I would say.

Moderator – Prateek:
Thanks. Those specifics help. So I wanted to move to this major part in the report which is focusing on interoperability of AI governance and I wanted to turn to Professor Xing Li who has worked very closely on internet governance and Professor Li wanted to understand what can we learn from internet governance to inform AI governance when it comes to interoperability.

Xing Li:
Okay, thank you very much for inviting me in this panel and Professor Xing Li from China, from Tsinghua University and actually 30 years ago, China connected to internet and we tried to participate and get into the different level of management or governance and at a higher level actually, the government should need to permit this kind of access and from technical layer actually, a lot of things. For internet, if we took look after evolution of the internet technology, there is IETF, Internet Engineering Task Force. That’s exactly for the technical interoperability works and engineers as individual work on that. So and then there are some other things, for example, the number assignment which is regional internet registries and the names that’s ICANN and a couple years ago, there is IANA transition so that make it from US centric to the global playground. So actually now it’s chart GPT and AI and I’m actually very excited on that and I believe Generative AI is something maybe even bigger than TCP IP. However Take a look of this area. We don’t have IETF. We don’t have this kind of organization So maybe that’s something we should take a look at that and work on that So and another thing actually I feel for internet from original technology and the evolute and the invention of the WWW and other technologies and the people try to understand things and there is no blueprints for that for generative AI I have a feeling probably the regulation Get into too early. We need to have innovation space at least for academics and the technical group we have some innovation space. Otherwise, it’s very difficult to move forward and Inside the country probably is okay for create a innovation place Actually, I really want global as a global village academics can work together and to make things more exciting. Thank you very much

Moderator – Prateek:
Thank you, sir Touching a briefly upon some of the the ways of governing internet that you mentioned this report actually Also talks about under the interoperability dimensions three things interoperability at the level of substantive tools Guidelines norms and so on then interoperability at the level of mechanisms for multi-stakeholder engagement And finally also talks about agreed ways of communication which really talks about agree agreeing on definitions concepts semantic interoperability Jose I wanted to turn to you and When you read the report, what were some of the key aspects around, say, the recommendations around interoperability that struck out for you, and what do you think about that?

Jose:
Hello, well, thank you very much, Pratik. It’s a great pleasure to be here. Looking at the report, one thing that I identified is that I think that we need to, considering that it is a report made by Global South representatives for the Global South, I think that one thing that we need to advance is in understanding what are the movements that we have in our own region, and within this, understand exactly what are the policies that we are driving forward, what are the narratives that we are pushing forward. And I think that when we look into regulation, it is, I think that it is a great thing that we are not focusing just on what’s going on in the EU and other countries which, and regions, bloc countries which are leading this debate, but also to those within the Global South. And I think that the main thing that we need to advance is understanding what are the points that are missing in the discussion, and I think that the report touches upon some of these issues, but I think that, especially when we look at our region, the main thing is that we need to understand what are specific challenges that we have, and I would like to mention, for instance, issues related to labor. We are not touching upon yet the impacts that the development of these systems, that the industry, the tech industry as a whole is having on labor, and I’m not talking about just what will be the future of work, let’s say, what people working now in offices will need in the next few years so that they are not left behind, let’s say, in this. in the advancement of, after the advancement of these technologies, but also what is happening with those so-called gig workers. In Brazil, at least, I can say that there is an intricate link between what’s happening with the people working in delivery platforms and issues related to race, and this relates also to the survival, let’s say. In Brazil, the increase in the deaths by drivers, motorcycle drivers, has increased in, if I’m not mistaken, it was like 80% in the last 10 years, and one of the reasons that many scholars have been debating is the fact that we have new platforms that demand other kinds of times of delivery, which pressure these workers in an extremely different way. So I think that this is an issue that we need to tackle, especially in our region, and also I think that we need to go deeper in the debates regarding sustainability. We’re gonna, the report is just upon this theme, and I think that it advances a lot in issues like tackling techno-solutionism, techno-optimism, because this discussion goal was against, not to say merely, but the strict issue on energy consumption, greenhouse gas emissions, and et cetera. It is upon politics. We have seen, for instance, some leaders, let’s say CEOs of tech companies, say, talking about the issues in Bolivia as that happened with, after President Evo Morales was out, and supposedly interests regarding the minerals that Bolivia has, especially lithium. So I think that we also need to advance this. And maybe one last point that I would touch upon in this question so I don’t take more time is the debate on biometric surveillance or the use of biometric systems. especially for surveillance purposes within and in the borders of countries is another issue that we need to take seriously. And considering issues related to, for instance, talking about Brazil once again, the structural racism that pervades, that passes through the criminal system is just being automated and accelerated with the development of these technologies. And a tech fix on them won’t solve it. We need to start thinking seriously about whether we are going to establish moratoriums for the systems or especially banning, which is one agenda that we are having very strongly in Brazil that I think that we could talk about it in the next report, how civil society is pushing forward for the banning of the systems over there. And yeah, I would say that it is one of the main issues that we are currently debating there. Thank you.

Moderator – Prateek:
Thanks, Jose. So I think you put it three points, right? There is the data, which is coming a lot from the global south. There’s the workers who are working on that data. And then there is the natural resources. And these three elements also come out quite strongly in the report or the case studies that we have seen in the report. And important for global discourse. Now I would like to turn to you, Mr. Jean-Francois. You read the report and you’ve seen some of the key challenges that are being mentioned. One of the things that the report talks about is also capacity building. And how do you see, how can we strengthen capacities in the global south, for instance, for engaging? first with processes, multi-stakeholder processes on governance, but also on the development use of AI. Over to you.

Jean Francois ODJEBA BONBHEL:
Okay, thank you so much. My name is Jean-Francois Bomben. I come from Congo, Brazzaville, and I’m AI and emerging technologies expert in regulatory. I’m working with RPC, which is the authority of regulatory in Congo, and we are expecting many things from AI and generative AI, but also fear about what is inside the box, you know? AI seem like a black box with many things inside, and so we are pushing that by three points. So the first one is benefits versus risk, and the second one is about accountability with the controller, and the last one is about education. I mean, education is a big part in our strategy. We created a school specialized in AI, from elementary to graduate one in Congo to make sure that our kids to educate population in general and make sure that everyone can access to the technologies and to know what is coming in, and all innovation can change life and bring developments. So make sure that no one will be keep outside. of that technology, that’s all.

Moderator – Prateek:
Thank you so much. I would now turn a bit to the second section of the report, which is really focusing on gender and race. But before that, I would also let the participants here know I’ll open the floor in about five minutes for questions, so feel free to, if you have something in mind as well. So the report cites the UN Human Rights Council, which said technology is a product of society, its values, and its priorities, and even its inequities, including those related to racism and intolerance. Sorayu, I wanted to turn to you and understand, do you have examples of gender or racial biases in AI systems that have impacted individuals or communities, and at the same time, if you can also give another example where, which also, the report also talks about some of those, where AI systems have been used to actually combat gender bias that we have in society. So, over to you.

Sarayu Natarajan:
Thank you for that question. I think it’s a broad and difficult one, and I’ll try my best to do as much as I can. I mean, before we jump into the question of gender bias and gender in AI systems, and along with gender, other forms of biases, such as racial, language, et cetera, do creep into AI systems, and it’s hard to talk about them in aggregates, because they do have their own specific politics. But having said that, there are some commonalities in these forms of intersections. But if you want to pick one, go with one and go with the specifics. Sure, sure, thank you. Okay, so the, before delving into the question of bias itself, I think it’s important to tackle. very briefly the forms of injustice that generative AI systems might have. One of course is the notion of data, data injustice that emerges in the context of gender, race, language, et cetera. And I’ll probably tackle language. There is also the injustice of labor and Jose here did refer to it. But to imagine generative AI without talking about the labor of annotators who make generative AI in the sense that they label data, they annotate data, they categorize data in ways that are accessible to researchers and scholars and builders of AI is very critical. Rather than delving into specific examples of language or gender bias that AI systems perpetuate, let’s talk about generative AI and the labor of generative AI. A lot of this labor is done in the global south. So millions of workers through various forms, platforms, sometimes in the form of large contracted organizations, work on data sets and labeling data sets and this is applicable to generative AI and several other forms of AI. Now in order to label, and let’s say you’re labeling a car or a bus or a vehicle or language or gender or race, the categories within which you label are often created in the west, which is that the company that’s getting AI made is the one that’s asking you to label, I don’t know, like a llama or a cow, objects which are often unfamiliar to the people that are labeling. So the origin of bias in a certain way is of course the larger politics of how AI is made, but it also is mediated by very, very different but it also is mediated by very, very specific practices. Around language, around even English language, language as being an input into large language models. So I think in order to talk about bias, it’s important to talk about labor, labor supply chains, the way in which AI itself is made, the way in which labeling and labeling categories are created. Jumping into how AI might enable or mitigate bias, I think there are several examples, but one specific example, and rather much more a concerted effort that has happened over time is in the Indian context that our efforts to develop large language models in non-mainstream languages. Several of these efforts, fortunately or unfortunately, have been spearheaded by small organizations who work in specific communities. And these efforts might make some of the benefits of generative AI accessible to wider communities in the languages that they speak. So I’ll pause here and hand back to you.

Moderator – Prateek:
Thanks, and I’ll add to that. There are some questions online. I’ll add to that also that in, for instance, in Africa, there’s a research group working on low resource African languages called the Masakhane community. So if anyone is interested to work with them or join them or support them, please do check out, they’re doing some fantastic work to create data sets in African languages as well. I would like to also turn to folks online. I don’t see you here, but if there are some questions from our online moderator, Shamira. Yeah, if Shamira, you can pick one or two questions online.

Shamira Ahmed:
Yes, sure. I will go to the first question we got. Thank you, Pratik. Can you hear me?

Moderator – Prateek:
Yes, very well.

Shamira Ahmed:
So the first question we got, I’m just going there quickly. The first question we got. was what, from Prince Andrew Livingston, what international collaborations and agreements are needed to govern AI on a global scale?

Moderator – Prateek:
OK, thank you. Can we collect a few questions?

Shamira Ahmed:
Yeah. And then the next question, I’m not sure if virtual attendees can raise their hand and pose their questions directly to you as well. Let’s see if there’s a question in the chat, and then we come back for people who may want to take the floor later. OK. And the next question was from Ayalo Shebeshi. And they had two questions. The first one was, how can we approach both the negative and positive impacts of AI, especially in global South and developing countries that are replacing human jobs? And then the next question was, how can we manage standards and international regulation of AI initiated by international bodies, as part of the UN and other agencies, and make sure that there is full agreement by all countries and nations?

Moderator – Prateek:
Thanks. Thanks, Shamira. So we have three questions. Two are quite similar. But I’d request you to hold on a bit, because I want to collect at least two questions from the room as well. And then we address everything at the same time. Anyone in the room would like to take the floor, please? Yes, please, our colleague from the UN University.

Audience:
Good morning. Good morning. Jingbo from UN University. Actually, this is much more intimate so we can communicate. So my question is related to capacity building. We know that AI is less to do with engineering science and more to do with empirical science, which means AI is not like an engineering product that you design and you know what’s gonna happen. Even the researchers, even the designer don’t necessarily know what’s gonna happen. So along the way, they have to test, experiment to find out what are the risks, what are the potential benefits extra. So my question is related to the difficulty in capacity building because things keep changing. Even the designer don’t know where to go and meanwhile, there’s misinformation, there’s different sources of information and how do we build capacity? How do we teach, for example, the school children or even our peers, my grandmother, for example. How do we let them know, inform them of what’s going on? Thank you.

Moderator – Prateek:
Thank you so much. We have our colleague from EY. Sir.

Audience:
Hi, Ansgar Kuna from EY. In AI, as with a number of these digital technologies that are arising, we’re seeing a blurring between the lines of sort of the more political space of regulatory development and the technical space of standards development. Standards are becoming increasingly an instrument in the implementation side of the regulations. And so my question is around the capacity building of enabling a wider community of stakeholders to engage in that sort of technical side that has become an important part of the bigger policy instruments.

Moderator – Prateek:
Thank you so much. So we have five questions. I will not go to each one of you to answer because that will take us ages. Who wants to take the questions around governance and the governance framework? So there’s one set around governance and how to make AI governance globally works. So I see Oven wants that. And then there’s one set around capacity building, both at the technical level, but also going to schools and so on. And so first we go to Oven then. Yeah, over to you.

Owen Larter:
Sounds good, thank you. So I think this is a really important question to ask. How do we build a coherent global governance framework for AI? And I think it’s important to realize that there is a difference between having a sort of globally coherent framework and having identical regulation in every single country. I don’t think we wanna get to the latter. I think what we want to have is a set of principles, probably a code of conduct that sets a high bar globally, but then allows individual countries to take those standards and implement them in a way that makes sense for them. I really think on this global governance conversation, we’ve made an enormous amount of progress over the last year. We’re coming up actually to quite a significant milestone. On the 30th of November, 2022, you had the launch of ChatGPT, which really did change the conversation, it seems, amongst the public and amongst lawmakers around the use of these technologies and their impacts on society. I think the progress that has been made is really quite significant. I think you’ve seen that this week in the types of conversations that we’re having here. Very importantly, you see that in the reports that has been put together. I think the G7 code of conduct through the Hiroshima process under the Japanese leadership, very, very important in terms of advancing this global conversation around how to develop and use AI. I do think you’ve sort of got the building blocks in place now for a sort of longer term conversation around what global governance should look like. I think as we have that conversation, we should sort of take a step back and think about a couple of. things. The first is what ultimately do we want a global governance framework to do? And secondly, what can we learn from existing global governance regimes? And I think there’s probably at least three things that we want this framework of the future to do. The first is around standards setting. I think standards are going to be really important in terms of advancing this coherent global regime. I think there are great lessons to be drawn from organizations like ICAO, the International Civil Aviation Organization, part of the UN family, where you have a really broad global representative conversation with pretty much every country in the world participating in it to set global safety and security standards that are then implemented by domestic government. So I think that kind of standard setting piece is really important. I also think having a conversation, this was addressed in the report as well, around advancing an understanding and consensus around risks. I think this is a really important piece of the global discussion. You can look at organizations like the Intergovernmental Panel on Climate Change, for example, again, part of the UN family that I think has done a really good job of advancing an evidence-based understanding of the risks around climate. I think we should be looking to do something similar when it comes to AI. Then maybe the final piece that I’ll mention, and this sort of bleeds over into the capacity building conversation, is around building out infrastructure. So this technology is moving so quickly, it’s easy to forget sometimes that it is still relatively new. So the transformer architecture that underpins these large language models that are causing a lot of excitement and enthusiasm at the moment is only six years old, developed in 2017. There are an enormous amount of open research questions that we really need to continue to invest in and tackle. We need to provide the infrastructure to academics and researchers to be able to do that. So there are interesting proposals in the US, for example, for something called the National AI Research Resource. This is an idea of developing publicly available compute data and model that academics would be able to use to study these technologies more and advance our understanding around them. So, technology investment in the infrastructure. One piece I would really emphasize there is the importance of developing evaluations for these technologies. It’s a very difficult space with a lot of open gaps at the moment. We need to make some progress there. Then the final point I’ll make, and then I’ll stop talking, is around the social infrastructure as well. So, we need to be able to find sustained ways of having global conversations, building on, I think, the great progress that we’ve made this year and conversations like this, to have a really globally representative discussion around these issues that allows us to, quite frankly, monitor how the technology goes, keep track of things as technology progresses, and be able to adjust and be nimble with how we’re approaching these things as a global community.

Moderator – Prateek:
Thanks, Sobhan. I saw that, Jose, you wanted to comment also on the governance part, and then Nobu, I’ll turn quickly to you as well on how you see the global governance landscape evolving as well.

Jose:
Thank you, Pratik. When we discuss the global governance of AI, I have to admit that, at the moment in which we are, I am quite skeptical that we are, at this moment, going to advance something in this regard, in a way that does not mean a race to the bottom regarding what are the parameters that we have to govern these technologies and their impact. I mean this because we have an interplay of many narratives going on, which lead many countries, and like this, I’m talking about geopolitics, the geopolitics of these technologies and issues related to competition, to the supposedly AI race that exists, and which has been framed as something quite similar to what we had during the Cold War. So, I think that this is a huge barrier for us to overcome. if we want to have a reasonable sort of global AI governance regulation or whatever way we can frame this. And I think that especially if we consider the countries from our region, and I’m talking about the majority world, global south, I think that there are forum, there are some forum that we need to push forward this agenda in order to have our interests in play. And I’m talking about the BRICS, G20 to some degree as we have the presence of many countries from Latin America, from the African continent, from South Asia and et cetera. And this means a pressure on the global north because they are the ones who have developed these technologies who are pushing them forward and who are having their, especially their companies dictate the agenda of what is a tech worth of our attention or not. But if I were to pinpoint two points, and now I’m gonna make reference once again to the issues of labor and of the extraction of natural resources as it’s commonly called. I think that first of all, maybe I’ll tell you a story to illustrate this. In the beginning of the year, there was a genocide in the Brazilian Amazon of a specific ethnicity called the Anamames. And these people was being killed and had their territories invaded by gold miners, illegal gold miners. And afterwards, after a while, our federal police identified that one of the companies which was dealing with these illegal gold miners was selling gold to companies like Amazon, Apple, Google, and Microsoft. And so this is one thing, we need to deal with the issue. related to the expression of these resources and the impacts that they have in groups, which, and of course, when we’re talking about global governance of technology, we’re saying also that there are groups that seem to matter more than others. And I’d say that the anomalies in this case seem to be in the side of the ones that are less worried about. And so, in this point, I think that we need to start seriously thinking on how to deal with the materials that we are doing, with the lives that we’re impacting. And here, once again, and I think that the point related to also the clique workers and the ones who are helping develop these technologies who come from the global south, I think this is also a necessary discussion that we need to have in a global governance, be it through the control by workers of algorithms and of the decisions being made by these companies, or, and of course, better working conditions for them, and having the due responsibility of the companies who are in the higher edge of the chain to be responsible for what’s going on in these situations.

Moderator – Prateek:
Thanks, Jose. So, in a way, the point that Erwin was also making around accountability across the value chain and evaluations and so on should be part of the global governance frameworks and these evidence-based processes that are being talked about. Nobu, coming from a government, where do you see is the global governance of AI going? And what is your perspective on that?

Nobuo Nishigata:
Let me say that from the global governance, of course, that’s one single country’s government, and of course, we are looking at what is taking place in the global sphere, like here, OECD, UN, of course, and UNESCO, et cetera, or ITU. Then, like, to me… as a government person, I mean, I cannot write a code, honestly, but on the other hand, I can write the legislation in Japanese. This is my job, right? So then, you know, like we want to have some room to have our own, like, you know, for example, once we have the treaty in the top, of course, we respect the treaty, we sign it, and then we have to do something to be aligned with the treaty, right? So maybe, like, for this case, I mean, it’s, I’m not sure, it’s maybe too early, you know, I recognize that some, particularly the Council of Europe people are for, you know, working hard to get a framework treaty convention, and I was in some negotiation in the same way, but, so once we, for example, once we got some treaty on AI, but we don’t want to have the very strict treaty, because it’s a moving target. I mean, when it comes to the, like, a human right, or like, maybe for the war, etc., then it could be that way, but on the other hand, for this case of AI, then we don’t want to have the very strict upper hand, and then so that we don’t have any room to do our own, I mean, it’s not only for Japan, I would say it’s for every government, and every government workers out there. Then, so then, they’re getting some growth, so from the point of the OECD, then, I mean, I’m not studying the OECD anymore, because I graduated from there, but still, I mean, their principles are very simple, like, five value-based principles, like, you know, like some people said about accountability, yes, there is one, and explainability, and safety, security, robustness, those kind of things, and then this comes first, I mean, you know, the advanced AI systems dealing with the human, then we need a safety, right? Then, of course, the principle touches on the privacy, and fairness, and human rights issues, and then that’s it. Then, on the other hand, we have five more principles, but it’s more like a guidance to the government, like, you know, government has to work on the ecosystem, the government has to work on some skills or capacity building, or etc, like a regulation if needed, or maybe creating some testbed to facilitate their own work in every place in the world. And of course, in the end, as we see this perspective, then we want to have more collaborations between the countries, so then the last principle is about talking about international collaboration, both in the policies and the techniques and standards, etc. So, I mean, you know, there are a bunch of different international organizations, and each organization has a different membership, different mandate, etc. So, it should be there, you know, it’s natural to have the very various type of for example, recommendation principles, guidelines, etc. But still, I mean, the bottom line is not very different, I would say. I mean, just the thing that OECD was about only the first one, but still, I mean, you know, I don’t see that there are too many different things. I mean, you know, it’s more like a version of each organization, and they have to do it, because, you know, each organization is a different body.

Moderator – Prateek:
I can say that all the international organizations working on this are mostly coordinated, so, I mean, from UNESCO, we do work with the OECD, with the Council of Europe, the African Union, with the European Commission, to at least exchange where the work is going, because at the end of the day…

Nobuo Nishigata:
I mean, we can share the episode of the Pratik and I, I used to have the lunch over the river of the Seine in Paris, right?

Moderator – Prateek:
Exactly. So thanks for that. I want to turn now to the second kind of set of questions that we had around capacity building. So I’ll turn first to to Mikey, then to Sorayu, Professor Ching-Li, and to Jean-Francois. What is it that we need to strengthen capacities across different levels and for different things from development of technical standards to development of governance to just using AI in our daily life to detecting this information. We had a wide variety of capacities that were mentioned. So maybe each one of you can pick on some. Over to you, Mikey.

Maikki Sipinen:
So the audience questions were about capacity building as well as how might be enabled a wider community to take part in this AI dialogues and debates. And from the way I see it, they are sort of parts of one and same. So of course, we need to improve our efforts in introducing AI and data governance topics in schools and universities and training citizens and the labor force, at least in basics of AI. And there are many amazing initiatives to be found everywhere in the world. For example, in Finland, where I’m from, I think Finnish AI strategy managed to train more than 2% of the Finnish population in basics of AI just in under one year. So that’s a good benchmark on what’s possible if there is a well. And something else I’d like to highlight here is the capacity building of civil servants and policy makers since this is an area that would really deserve and require even more space in the AI governance discussion. I like to know what just the moment that gets said, like I can’t write the. I can write the regulation for Japan and this is exactly what we should all understand and appreciate that we need different kinds of AI expertise to come in and work together so that we can make this global AI governance happen so that it’s inclusive and fair for us. Maybe you already guessed that capacity building is my personal favorite topic under AI and earlier this spring we were kind of brainstorming and discussing with the P&AI community like which topics to select for a report because it’s quite obvious that not not all AI and data governance related thing can be included and covered in one report and I was kind of secretly hoping that someone would suggest this capacity building and I was a bit bummed when that didn’t happen, but over the past month I realized that this is sort of naturally interwoven in all of our report topics as well as for government. All the groups in the end navigated towards capacity building and included some recommendations or sentences of that. It’s really in the core of all our topics and I trust that in the coming years we will have more focus on capacity building global dialogues as well.

Moderator – Prateek:
Thanks, Maiki. Sorayu, you would like to take that?

Sarayu Natarajan:
Thank you, you’re absolutely right. There are multiple categories of capacity building, multiple groups that need to engage with AI technology, different types of AI technology in different contexts. So you’re absolutely right. right, there’s the ability of the population, citizens at large, to engage with AI and get the best out of it in a certain way, or at least not be harmed by it. Then there is the question of how do technical communities from different domains, the legal, policy, governance community on the one hand, the technical community on the other, how do they talk to each other? And maybe I’ll focus on that very briefly. I mean, my starting point on capacity building, adult education, whatever you want to call it, is the idea of mutuality, which is that both of these disciplines, both of these sort of empirical starting points need to be able to talk to each other in a meaningful way. Just as much as I have benefited from learning about embedding in large language models, I do think technical communities would benefit from understanding, for example, the politics of category creation. Given the empirical emphasis of gender to AI. Understanding non-negotiable human rights, the role of state vis-a-vis citizen rights. So having a sense of these is a mutual expectation and a mutual process. And I think that having various fora that enable these in a non-judgmental way, in a recognition of various empirical starting points is critical. The other, I thought I heard a question on the gains and harms of AI, including specifically on job loss. And right away, or? Please feel free to take that as well. Right, thank you. I think there was a question on some of the gains and harms of AI, and specifically focused, if I understood it correctly, on job loss. I do think it’s a genuine challenge, particularly from some forms of generative AI. I do think as a society, we are still starting to gather the evidence and understand how different forms of, different applications of generative AI might cause different. different consequences, particularly around jobs. The legal community, the tech community, the coding community particularly, are likely to be affected by the easy availability of the capabilities of generative AI. That is understood, but I think the degree and extent is to be better imagined. There is an ILO report that does say that the impacts of job loss are more likely to be felt in the global north. Consequently, the global south will actually gain from very specific types of jobs that generative AI will generate. And I think we’ll have to be careful and observe this a little bit more. It shouldn’t be that, in a certain way, the capacities of generative AI ends up further ossifying the barriers and the types of jobs that exist in different parts of the world, that it’s not some forms of click work alone that remain, and then some of the skilled jobs, particularly ones that relate to category creation, we want to go back to that point, remain with existing powers. So I think we have to keep watching this job loss question. And then, of course, humane conditions, just conditions for workers in different parts of the world. So I’ll pause there.

Moderator – Prateek:
Thanks. Professor Xing Li.

Xing Li:
Oh, okay. Capability beauty is also my favorite topic. Actually, people, I believe this creates, generic AI create opportunities, but also the challenges to the global south. And the people refer genetic AI to three factors, usually that’s the algorithm, the computing power, and the data. And actually, I would like to add another thing that’s more important, that’s education. That’s very, very important, that’s the human resource. And the traditional education sometimes need to be changed. I believe in this AI age, yes. Four things very important, the first is critical thinking. In the old days, student just follow what teacher said. However, if it’s a rabbit or AI, then you have to have the ability for critical thinking, that’s first thing. Second, everything should based on fact. Third, logical thinking. And the finally, but also very important, global collaboration. The people need, okay, the youngsters need to have ability in these four areas. I believe it’s important. So I really like to see the global AI related education system. That may be as many years ago, I mean hundreds year ago, that’s creation of the modern university. We need some kind of new educational systems in the AI age. One of the lady professor from Stanford University, Fei-Fei Li said, we need Newton and Einstein in AI age. Thank you.

Moderator – Prateek:
Thank you, sir. A plug for UNESCO colleagues working on education and AI. They’ve launched some guidelines on generative AI and education. So if you’re interested, feel free to check that out as well. I’ll move to Jean-Francois.

Jean Francois ODJEBA BONBHEL:
Okay, thank you. So I will switch to French, I’m more proficient in French. So I will speak to you in French. I would switch to French because I want to make sure that I can express myself correctly, my thoughts. In terms of this capability, skill sets, I think this is overall, and it is global. I think that this is something that we can implement, that we could put in place different training sessions, and then everybody would be on the same footing. And there are various perspectives which we could put forward as a teacher, as an educator, and also as a software developer. We have devised different processes for AI. I work with computers and I work specifically on governance. So what I do see, I understand the world that I live in, that I work in. I am working on a world, I am devising that world where my children are going to be living. I work with researchers, developers, and I also take a step back and I ask myself a question. Is it actually that world, that specific world that I want to think about, that I want to create design for my children, for them to be able to live there? To make sure, how am I going to be certain of that? Now with regard to capacity building, we have implemented a program specifically designed for kids, for children which work with technology at the age of 6 until the age of 17. And we thought about numerous questions and ideas. We said to ourselves, what do we want? Do we want these youngsters to become experts? Do we want all of them to become experts in technology? Or simply, are we going to be preparing them, are we going to be endowing them with the necessary skill sets for the life, for the world where they are going to be living in? Now with regard to the solutions that we are going to be providing for them, we want to have a multi-faceted solution for them. What is the environment that they are going to be living? Is it the teacher or the parent? What are the options that we are going to be bringing about? Of course, while there are sanctions or punishments of course, if you don’t learn this or that, you’re going to be punished. Now our children and us, ourselves, we live in a world where these solutions, there are multiple solutions. We are developing their cognitive skill sets so that they have options, they have solutions so that they can have different ways of resolving all of these problems. There are multiple solutions. So this is a facet in developing or the skill building that we are talking about here today. And it is part of it. It is part and parcel. We need to be able to equip our children with the necessary skill sets of AI that is aligned with numerous and multiple solutions. And this is the focus that we have been developing, devising and working together. I feel that this is the appropriate approach within the educational school environment.

Moderator – Prateek:
On the final segment, and I see we have only 10 minutes left. So we have Shamira, our colleague online, who has worked on the environmental aspects of the report. Shamira, would you like to share some of the key insights from the discussions that you’ve had around environment and data governance and some of the case studies that you’ve presented in the report?

Shamira Ahmed:
Sure. AI and the environment and data governance is quite broad. But because the focus of the report was on the data governance aspects, we focused on the data governance aspects at the nexus of AI and the environment. And in summary, our recommendations collectively offer a multi-stakeholder perspective from the global south. And as mentioned by the other speakers, to promote interoperable AI governance innovations that harness the potential of AI, we focus on the multi-dimensional aspects of AI. of data for sustainable digital development. And sustainable digital development is basically a way forward of leveraging digital technologies that also considers environmental aspects, economic aspects, and also societal aspects in one comprehensive Venn diagram, let’s say. And we also discussed addressing historical injustices. We advocated for a decolonial informed approach to the geopolitical power dynamic that some people have mentioned in, for example, the materiality of AI when we consider the value chains and of the materials that go into AI. When we consider the multi-stakeholder process, is it really representative? And are the standards made in consideration of innovation ecosystems or global self-institutional mechanisms and situations? So we also talked about inclusion and minimizing environmental harms as many of the other speakers have highlighted as well. So in summary, we highlighted that a just green digital transition is vital for achieving a sustainable and equitable future and the way that leverages AI in order to drive responsible practices for the environment that promotes economic growth, social inclusion, and essentially provide a pathway toward a more resilient and sustainable world that actually meets the contextual realities of the global South. Most of the panelists have summarized and captured the report quite succinctly. And as Mikey mentioned, we mentioned capacity building. We talked about geopolitics. We talk about. the environmental aspects of AI. We talked about the data governance aspects. We talked about interoperability and defining key terms. So I think it’s a comprehensive report and I learned a lot during the making of the writing report and it was a truly bottom-up multi-stakeholder process. Thank you.

Moderator – Prateek:
Thank you, Shamira, for sharing that summary and also some of the important work that you’ve been doing on environment. I would now really open the floor here, not for questions, but for any recommendations that we have from the audience here on what could be other issues that this group would explore and this is also an invitation to you to join. This is a multi-stakeholder policy network to join this group. So are there any folks who would like to share any recommendations or thoughts? Yes, sir.

Audience:
Oh, hi. My name’s Yeo Lee from World Digital Technology Academy. We do research, also training and material for education on AI. And Pradeep, you mentioned UNESCO will do more for AI training and education. Right now, for example, for our published book and textbook, we provide it to a lot of universities, particularly in China, developing countries, global thoughts. So in the future, UNESCO will have any process for us to contribute or will you do training just yourself?

Moderator – Prateek:
I will definitely come back to you on that bilaterally because the session is not on UNESCO, it’s on. on the Policymakers Network, but happy to share what we are doing and link you with colleagues in the education work, for sure. Thank you. Anyone else who would like to share any recommendations? And perhaps if there’s someone online as well, Shamira.

Shamira Ahmed:
Yes, there is someone online.

Moderator – Prateek:
Okay. So if there’s someone who would like to take the floor online, I believe.

Shamira Ahmed:
Yes, I think the host should give rights and the person who’s raised their hand should put their video on.

Moderator – Prateek:
Okay, while we wait, we can go to the floor here.

Audience:
Hi, Ansgar Kuhne from EY. I think an important aspect is going to be the question about how we do the assessment to test whether the systems are achieving what we want them to be able to achieve. And specifically the question that is often raised is how do we assess the non-strictly technical aspects around the performance, such as the challenges around how the system is actually operating in the context of places where they haven’t been built and whether they are having unintended consequences in that kind of context on, for instance, the workers in these environments or the way in which people are being categorized through these systems. So thinking through the assessment assurance process of how these systems are operating, especially on those non-technical properties of the systems.

Moderator – Prateek:
Thank you so much. So I’ll now turn back to the panelists for three keywords which are future oriented and what this group should look at. You’ll see the screens, we have four minutes, so you have only three keywords each. Maybe we start with Nobu on the other side.

Nobuo Nishigata:
Three keywords, maybe we have to continue about the picture for this forum, the global south, education, and then maybe harmonization.

Moderator – Prateek:
Thank you. Mikey.

Maikki Sipinen:
I choose the keywords inclusive, future, and ENAI.

Moderator – Prateek:
Thanks. Joseph. Oh, they’ve got, I would say, let’s keep up with the initiative, let’s include other things, let’s go further in the ones that we have already debated, and this kind of initiative in this forum is fundamental, and thanks for the IGF for that, I guess it was in this opportunity that this was all possible. Thank you. Soraya.

Sarayu Natarajan:
Global, not global south, workers, and rights.

Moderator – Prateek:
Thanks. Professor Shingley.

Xing Li:
Critical thinking and global collaboration. And global? Collaboration. Collaboration.

Moderator – Prateek:
Owen.

Owen Larter:
Three thoughts, I guess. One, sort of get concrete on capacity building and what we can be doing to drive things forward. Two, invest in evaluations, invest in evaluations, invest in evaluations, it’s a major gap. And across all of this, continue to bring together technical audiences with non-technical people that understand the socio-technical challenges of these systems as well.

Moderator – Prateek:
Thanks. Jean-Francois.

Jean Francois ODJEBA BONBHEL:
I would say innovation, education, and accountability.

Moderator – Prateek:
Thank you so much to our panelists here for your insightful thoughts, to the participants both online and in person here. We invite you to look at the report, which was, as again, mentioned before, developed in a multi-stakeholder manner. It’s available on the website of the IGF under Policy Network on Artificial Intelligence. This work will continue and you are invited to join and expand this community going forward. Thank you so much and have a good day. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

148 words per minute

Speech length

535 words

Speech time

217 secs

Jean Francois ODJEBA BONBHEL

Speech speed

149 words per minute

Speech length

783 words

Speech time

314 secs

Jose

Speech speed

172 words per minute

Speech length

1487 words

Speech time

518 secs

Maikki Sipinen

Speech speed

148 words per minute

Speech length

772 words

Speech time

313 secs

Moderator – Prateek

Speech speed

164 words per minute

Speech length

2907 words

Speech time

1061 secs

Nobuo Nishigata

Speech speed

167 words per minute

Speech length

1846 words

Speech time

664 secs

Owen Larter

Speech speed

211 words per minute

Speech length

1808 words

Speech time

514 secs

Sarayu Natarajan

Speech speed

177 words per minute

Speech length

1772 words

Speech time

602 secs

Shamira Ahmed

Speech speed

129 words per minute

Speech length

640 words

Speech time

299 secs

Xing Li

Speech speed

150 words per minute

Speech length

620 words

Speech time

248 secs

Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Yik Chan Chin

In thorough discussions concerning China’s data policy and the right to data access, correlated with Sustainable Development Goal 9 (Industry, Innovation, and Infrastructure) and Sustainable Development Goal 16 (Peace, Justice and Strong Institutions), China’s unique interpretation of data access has become a focal point. According to the analysis, the academic debate and national policy in China are primarily driven by an approach that interprets data as a type of property. This perspective divides rights associated with data into three fundamental components: access, processing, and exchange rights. It posits that these rights can be traded to generate value, as explicitly stated in the government’s policy documents.

However, this policy approach has sparked substantial critique for its disregard of other significant aspects of data access. Chinese policies predominately fail to recognise data’s inherent character as a public good. The academic sphere and governmental policy make scarce acknowledgement of this, undervaluing its potential contribution to societal advancement beyond merely commercial gains. Along these lines, the rights and benefits of individual citizens are often overlooked in favour of promoting enterprise-related interests.

The country’s data access policy is primarily designed to unlock potential commercial value, especially within enterprise data – an aspect contributing to the imbalance of power between individual users and corporations. Such power dynamics remain largely unaddressed in China’s data-related discussions and policy settings, potentially leading to a power imbalance detrimental to individuals.

Given these observations, the overall sentiment towards the Chinese data policy appears to be broadly negative. Acknowledging data’s essence as a public good and according importance to individual rights and power balances would be fundamental components for a more favourable policy formulation and discourse. The inclusion of these elements will ensure that the data policy reflects the principles of SDG 9 and SDG 16, aiming for a balance between enterprise development and individual rights.

Vagisha Srivastava

Web Public Key Infrastructure (WebPKI), an integral component of internet security, provides several benefits such as document digital signing, signature verification, and document encryption. This is epitomised by an incident involving a company named DigiNotar, which, through the misissuing of 500 certificates, compromised internet security – underlining the significance of digital certificates in web client authentication.

WebPKI governance intriguingly falls within the public goods paradigm. While the government traditionally delivers public goods and the commercial market handles private goods, in the case of WebPKI, private entities take noticeable strides in contributing; this defies conventional dynamics in the production of both public and private goods. That said, the government’s involvement isn’t entirely dispelled, with the US Federal PKI and Asian national Certification Authorities (CAs) actively partaking.

The claim that private entities are spearheading WebPKI security governance presents certain concerns. Governments may find themselves somewhat hamstrung when attempting to represent global public interest or generate global public goods in this complex context. As a result, platforms which are directly affected by an insecure web environment (such as browsers and operating systems) secure vital roles in security governance.

The Certificate Authority and Browser Forum, established in 2005, is crucial in coordinating WebPKI-related policies. This forum serves as a hub where root stores coordinate policies and garner feedback from CAs directly. In fact, its influence is such that it sets baseline requirements for CAs on issues like identity vetting and certificate content, since its inception.

Regarding the internal functionings of such organisations, the voting process within the consensus mechanism is diligently arranged prior to the actual voting process. Any formal language proposed for voting is already agreed upon, and the consensus mechanism is established pre-voting. Notably, there is curiosity surrounding how browsers, an integral part of the internet infrastructure, respond to such voting processes.

To conclude, internet security and governance system operate within a complex realm driven by both private and public actors. Entities like WebPKI and the Certificate Authority and Browser Forum play pivotal roles. The power dynamics and responsibilities between these players influence the continued evolution of policies related to internet security.

Kamesh Shekar

The in-depth analysis underscores the urgent necessity for a comprehensive, 360-degree, full-circle approach to the artificial intelligence (AI) lifecycle. This involves a principle-based ecosystem approach, ensuring nothing in the process is overlooked and emphasising a need for coverage that is as unbiased and complete as possible. The subsequent engagement of various stakeholders at each stage of the AI lifecycle, from inception and development through to the end-user application, is seen as pivotal in driving and maintaining the integrity of AI innovation.

The principles upon which this ecosystem approach is formed have been derived from a range of globally respected frameworks. These include guidelines from the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the European Union (EU), and notably, India’s G20 declaration. Taking these well-established and widely accepted frameworks on board strengthens the argument for thorough mapping principles for varied stakeholders in the AI arena.

The analysis also delves into the friction that can occur around the interpretation and application of said principles. Distinct differences are highlighted, for instance, in the context of AI facets such as the ‘human in the loop’, illustrating the different approaches stakeholders adopt at various lifecycle stages. This underscores the importance of operationalisation of principles at every step of the AI lifecycle, necessitating a concrete approach to implementation.

A key observation in the analysis is the central role the government plays in overseeing the implementation of the proposed framework. Whether examining domestic scenarios or international contexts, the study heavily emphasises the power and influence legislative bodies hold in implementing the suggested framework. This extends to recommending an international cooperation approach and recognising the potentially pivotal role India could play amidst the Global Partnership on AI (GPA).

The responsibility of utilising these systems responsibly does not rest solely with the developers of AI technologies. The end-users and impacted populations are also encouraged to take on the mantle of responsible users, a sentiment heavily emphasised in the paper. In this thread, the principles and operationalisation for responsible use are elucidated, urging a thoughtful and ethical application of AI technologies.

An essential observation in the analysis is the lifecycle referred to, which has been derived from and informed by both the National Institute of Standards and Technology (NIST) and OECD, with a handful of additional aspects added and validated within the paper. This perspective recognises and incorporates substantial work already performed in the domain whilst adding fresh insights and nuances.

As a concluding thought, the analysis recognises the depth and breadth of the topics covered, calling for further in-depth discussions. This highlights an open stance towards continuous dialogue and the potential for further exploration and debate, possible in more detailed, offline conversations. As such, this comprehensive and thorough analysis offers a wealth of insights and provides excellent food for thought for any stakeholder in the AI ecosystem.

Kazim Rizvi

The Dialogue, a reputed tech policy think-tank, has authored a comprehensive paper on the subject of responsible Artificial Intelligence (AI) in India. The researchers vehemently advocate for the need to integrate specific principles beyond the deployment stages, encompassing all facets of AI. These principles, they assert, should be embedded within the design and development processes, especially during the data collection and processing stages. Furthermore, they argue for the inclusion of these principles in both the deployment and usage stages of AI by all stakeholders and consumers.

In their study, the researchers acknowledge both the benefits and challenges brought about by AI. Notably, they commend the myriad ways AI has enhanced daily life and professional tasks. Simultaneously, they draw attention to the intrinsic issues linked with AI, specifically around data collection, data authenticity, and potential risks tied to the design and usage of AI technology.

They dispute the notion of stringent regulation of AI at the onset. Instead, the researchers propose a joint venture, where civil society, industry, and academia embark on a journey to understand the nuances of deploying AI responsibly. This approach would lead to the identification of challenges and the creation of potential solutions appropriate for an array of bodies, including governments, scholars, development organisations, multilateral organisations, and tech companies.

The researchers acknowledge the potential risks that accompany the constant evolution of AI. While they recall that AI has been in existence for several decades, the study emphasises that emerging technologies always have accompanying risks. As the usage of AI expands, the researchers recommend a cautious, steady monitoring of potential harms.

The researchers also advise a global outlook for understanding AI regulation. They posit that a general sense of regulation already exists internationally. What’s more, they suggest that as AI continues to grow and evolve, its regulatory framework must do the same.

In conclusion, the research advocates for a multi-pronged approach that recognises both the assets and potential dangers of AI, whilst promoting ongoing research and the development of regulations as AI technology progresses. The researchers present a balanced and forward-thinking strategy that could create a framework for AI that is responsible, safe, and of maximum benefit to all users.

Nanette Levinson

The analysis unearths the growing uncertainty and expected institutional alterations taking centre stage within the sphere of cyber governance. This is based on several significant indicators of institutional change that have come to the fore. Indicators include the noticeable absence of a concrete analogy or inconsistent isomorphic poles, a shift in legitimacy attributed to an idea, and the emergence of fresh organisational arrangements – these signify the dynamic structures and attitudes within the sector.

In a pioneering cross-disciplinary approach, the analysis has linked these indicators of institutional change to an environment of heightened uncertainty and turbulence, as evidenced from the longitudinal study of the Open-Ended Working Group.

An unprecedented shift within the United Nations’ cybersecurity narrative was also discerned. An ‘idea galaxy’ encapsulating concepts such as human rights, gender, sustainable development, non-state actors, and capacity building was prevalent in the discourse from 2019 through to 2021. However, an oppositional idea galaxy unveiled by Russia, China, Belarus, and a handful of other nations during the Open-Ended Working Group’s final substantive session in 2022, highlighted their commitment towards novel cybersecurity norms. The emergence of these opposing ideals gave rise to duelling ‘idea galaxies’, signalling a divergence in shared ideologies.

This conflict between the two ‘idea galaxies’ was managed within the Open-Ended Working Group via ‘footnote diplomacy.’ Herein, the Chair acknowledged both clusters in separate footnotes, paving the way for future exploration and dialogue, whilst adequately managing the current conflict.

Of significant note is how these shifts, underpinned by tumultuous events like the war in Ukraine, are catalysing potential institutional changes in cyber governance. These challenging times, underscored by clashing ideologies and external conflict, seem to herald the potential cessation of long-standing trajectories of internet governance involving non-state actors.

In conclusion, there is growing uncertainty surrounding the future of multi-stakeholder internet governance due to the ongoing conflict within these duelling idea galaxies. The intricate and comprehensive analysis paints a picture of the interconnectivity between global events, institutional changes, and evolving ideologies in shaping the future course of cyber governance. These indicate a potential turning point in the journey of cyber governance.

Audience

This discussion scrutinises the purpose and necessity of government-led mega constellations in the sphere of satellite communication. The principal argument displayed scepticism towards governments’ reasoning for setting up these constellations, with a primary focus on their significant role in internet fragmentation. Intriguingly, some governments have proposed limitations on the distribution of signals from non-domestic satellites within their territories. However, the motives behind this proposal were scrutinised, specifically questioning why a nation would require its own mega constellation if their field of interest and service was confined to their own territories.

Furthermore, the discourse touched on the subject of ethical implications within the domain of artificial intelligence (AI). It highlighted an often-overlooked aspect in the responsible use of AI—the end users. While developers and deployers frequently dominate this dialogue, the subtle yet pivotal role of end-users was underplayed. This is especially significant considering that generative AI is often steered by these very end-users.

Another facet of the AI argument was the lack of clarity and precision in articulating arguments. Participants underscored the use of ambiguous terminologies like ‘real-life harms’, ‘real-life decisions’, and ‘AI solutions’. The criticism delved into the intricacies of the AI lifecycle model, emphasising an unclear derivation and an inconsistent focus on AI deployers rather than a comprehensive approach including end-users. The model was deemed deficient in its considerations of the impacts on end-users in situations such as exclusion and false predictions.

However, the discussion was not solely encompassed by scepticism. An audience member provided a positive outlook, suggesting stringent regulations on emerging technologies like AI might stifle innovation and progress. Offering a historical analogy, they equated such regulations to those imposed on the printing press in 1452.

Throughout the discourse, themes consistently aligned with Sustainable Development Goal 9, thus underscoring the significance of industry, innovation, and infrastructure in our societies. This dialogue serves as a reflective examination, not just of these topics, but also of how they intertwine and impact one another. It accentuates the importance of addressing novel challenges and ethical considerations engendered by technological advances in satellite communication and AI.

Jamie Stewart

The rapid advancement of digital technologies and internet connectivity in Southeast Asia is driving the development of assorted regulatory instruments within the region, underwritten by extensive investment in surveillance capacities. This rapid expansion, however, is provoking ever-growing concerns over potential misuse against human rights defenders, stirring up a negative sentiment.

Emerging from the Office of the United Nations High Commissioner for Human Rights (OHCHR) is a report on cybersecurity in Southeast Asia, bringing attention to the potential usage of these legal legislations against human rights defenders. Concerns are heightening around the wider consensus striving to combat cybercrime. The general assembly has expressed particular apprehension leaning towards misuse, especially of provisions that relate to surveillance, search, and seizure.

What emerges starkly from the research is a disproportionate impact of cyber threats and online harassment on women. The power dynamics in cyberspace perpetuate those offline, leading to a targeted attack on female human rights defenders. This gender imbalance along with the augmented threat to cybersecurity raises concerns, aligning with Sustainable Development Goals (SDG) 5 (Gender Equality) and SDG 16 (Peace, Justice, and Strong Institutions).

The promotion of human-centric cybersecurity with a gendered perspective charters a course of positive sentiment. The protective drive is for people and human rights to be the core elements of cybersecurity. Recognition is thus given to the need for a gendered analysis, with research bolstered by collaborations with the UN Women Regional Data Centre in the Asia Pacific.

An in-depth exploration of this matter further uncovers a widespread range of threats, both on a personal and organisational level. This elucidates the sentiment that a human-centric approach to cybersecurity is indispensable. Both state and non-state actors are found to be contributing to these threats, often in a coordinated manner, with surveillance software-related incidents being particularly traceable.

Additionally, the misuse of regulations and laws against human rights defenders and journalists is an escalating worry, prompting agreement that such misuse is indeed occurring. This concern is extended to anti-terrorism and cybercrime laws, which could potentially be manipulated against those speaking out, potentially curbing freedom of speech.

On the issue of cybersecurity policies, while their existence is acknowledged, concerns about their application are raised. Questions emerge as to whether these policies are being used in a manner protective of human rights, indicating a substantial negative sentiment towards the current state of cybersecurity. In conclusion, although the progression of digital technologies has brought widespread benefits, they also demand a rigorous protection of human rights within the digital sphere, with a marked emphasis on challenging gender inequalities.

Moderator

Throughout the established GigaNet Academic Symposium, held at the Internet Governance Forums (IGFs) since 2006, a multitude of complex topics takes centre stage. This latest iteration featured four insightful presentations tackling diverse subjects ranging from digital rights and trust in the internet, to challenges caused by internet fragmentation and environmental impacts. The discourse centered predominantly on Sustainable Development Goals (SDGs) 4 (Quality Education) and 9 (Industry, Innovation, and Infrastructure).

In maintaining high academic standards, the Symposium employs a stringent selection process for the numerous abstracts submitted. This cycle saw roughly 59 to 60 submissions, of which only a limited few were selected. While this guarantees quality control, it simultaneously restrains the number of presentations and hampers diversity.

Key to this Symposium was the debate on China’s access to data, specifically, the transformative influence the internet and social media platforms have exerted on the data economy. This has subsequently precipitated governance challenges primarily revolving around the role digital social media platforms play in managing data access and distribution. The proposed model for public data in China involves conditional fee access, with data analyses disseminated instead of the original datasets.

One recurring theme in these discussions related to the state-led debate in China that posits data as marketable property rights. Stemming from government policies and the broader economic development agenda, this perspective on data has dramatically influenced Chinese academia. However, this focus has led to a significant imbalance in the data rights dialogue, with the rights of data enterprises frequently superseding those of individuals.

Environmental facets of ICT standards also commanded attention, underscoring the political and environmental rights encompassed within these standards. Moreover, the complexity of measuring the environmental impact of ICTs, which includes carbon footprint and energy consumption through to disposal, confirms the necessity of addressing the materiality of ICTs. The discussion further emphasised that governance queries relating to certificate authorities are crucial to understanding the security and sustainability of low-Earth orbit satellites, given the emergence of conflicts and connections between these areas.

Concluding the Symposium was an appreciative acknowledgement of the participants’ contributions, from submitting and reviewing abstracts to adjusting sleep schedules to participate. Transitioning to a second panel without a break, the Symposium shifted its focus towards cyber threats against women, responsible AI, and broader global internet governance. Suggestions for improvements in future sessions included clarifying and defining theoretical concepts more comprehensively, focusing empirical scopes more effectively, and emphasising the significance of consumers and end-users in cybersecurity and AI discourse. The Symposium, thus, offered a well-rounded exploration of multifaceted topics contributing to a deeper understanding of internet governance.

Berna Akcali Gur

Mega-satellite constellations are revising global power structures, signalling significant strategic transitions. Many powerful nations regard these endeavours, such as the proposed launch of 42,000 satellites by Starlink, 13,000 by Guowang, and 648 by OneWeb, as opportunities to solidify their space presence and exert additional control over essential global Internet infrastructure. These are deemed high-stakes strategic investments, indicating a new frontier in the satellite industry.

Furthermore, the rise of these mega constellations is met with substantial enthusiasm due to their impressive potential in bridging the existing gaps in the global digital divide. Through the superior broadband connectivity, vital for social, economic, and governmental functions, offered by these satellite constellations, along with their low latency and high bandwidth capabilities, fruitful benefits, such as optimising IoT, video conferencing, and video games, can be harvested.

However, concerns have been raised over the sustainable usage of the increasingly congested orbital space. Resources in space are finite, and the present traffic could result in threats such as collision cascading. Such a scenario could make orbits unusable, depriving future generations of the opportunity to utilise this vital space.

European Union’s stance on space policy, particularly the necessity of owning a mega constellation, demonstrates some contradictions. While a EU document maintains that owning a mega constellation isn’t essential for access, it is thought crucial from strategic and security perspectives, revealing a potentially contradictory standpoint within the Union.

Another issue is fragmentation in policy implementation due to diversification in government opinions, as demonstrated by the decoupling of 5G infrastructure where groups of nations have decided against utilising each other’s technology due to cybersecurity issues. With the rise in the concept of cyber sovereignty, governments are increasingly regarding mega constellations as sovereign infrastructure vital for their cybersecurity.

Lastly, data governance is a significant concern for countries intending to utilise mega constellations. These countries may require that constellations maintain ground stations within their territories, thereby exercising control over cross-border data transfers, a key aspect in the digital era.

In conclusion, the growth of mega-satellite constellations presents a complex issue, encompassing facets of international politics, digital equity, environmental sustainability, policy diversification, cyber sovereignty, and data governance. As countries continue to navigate these evolving landscapes, conscious regulation and implementation strategies will be integral in harnessing the potentials of this technology.

Kimberley Anastasio

The intersection between Information Communication Technologies (ICTs) and the environment is a pivotal issue that has been brought into focus by major global institutions. For the first time, the Internet Governance Forum highlighted this interconnectedness by setting the environment as a main thematic track in 2020. This decision evidences increasing international acknowledgment of the symbiosis between these two areas. This harmonisation aligns with two key Sustainable Development Goals (SDGs): SDG 9, Industry, Innovation and Infrastructure; and SDG 13, Climate Action, signifying a global endeavour to foster innovative solutions whilst advocating sustainable practices.

In pursuit of a more sustainable digital arena, organisations worldwide are directing efforts towards developing ‘greener’ internet protocols. Within this landscape, the deep-rooted role of technology in the communication field has driven an elevated demand for advanced and sustainable communication systems. This paints a picture of a powerful transition towards creating harmony between digital innovation and environmental stewardship.

Within ICTs, standardisation is another topic with international resonance. This critical process promotes uniformity across the sector, regulates behaviours, and ensures interoperability. Together, these benefits contribute to the formation of a more sustainable economic ecosystem. The International Telecommunications Union, a renowned authority within the industry, has upheld these eco-friendly values with over 140 standards pertaining to environmental protection. Concurrently, ongoing environmental debates by the Internet Engineering Task Force suggest a broader trend towards heightened environmental consciousness within the ICT sector.

The materiality and quantification of ICTs are identified as crucial facets to environmental sustainability. Measuring the environmental impact of ICTs, although challenging, is highlighted as vital. This attention underlines the physical presence of ICTs within the environment and their consequential impact. This primary focus realigns with the targets of the aforementioned SDGs 9 and 13, further emphasising the significance of ICTs within the global sustainability equation.

In parallel with these developments, a dedicated research project is being carried out on standardisation from an ICT perspective, involving comprehensive content analysis of almost 200 standards from International Telecommunications Union and Internet Engineering Task Force members. This innovative methodology helps position the study within the wider spectrum of standardisation studies, overcoming the confines of ICT-specific research and implying broader applications for standardisation.

Alongside this larger project, a smaller but related initiative is underway. Its objective is to understand the workings of these organisations within the extensive potential of the ICT standardisation sector. The ultimate goal is to develop a focused action framework derived from existing literature and real-world experiences, underlining an active approach to problem solving.

Collectively, these discussions and initiatives portray a comprehensive and positive path globally to achieve harmony between ICT and sustainability. Whilst there are inherent challenges to overcome in this journey, the combination of focused research, standardisation, and collaborative effort provides a potent recipe for success in the pursuit of sustainable innovation.

Session transcript

Moderator:
Yes, good. Okay, thanks a lot. Good morning, afternoon, evening, good night to many of the people here. Thank you very much for coming. This is the GigaNet Academic Symposium, which as tradition has been going since 2008 at the IGFs. 2006? Oh, the first one I did was 2008. Sorry, Milton. Thanks for the memory. Since 2006 at the IGFs, we’re very grateful to the IGF Secretariat for facilitating this meeting and getting us into this jam-packed program for this year with lots of exciting panels going on. We have a very exciting conference for you today as well, and happy to see so many faces in the room and quite a few people online as well. Just a bit of a summary of the background for this year’s symposium. We were set up, we were informed of the date for the IGF, and then went through our rigorous academic procedure for selecting abstracts that emerged as a result of a call. We had 59 or 60 abstracts submitted to the workshop, and we were able to accept only a small number of these. Thank you very much to everybody who participated in this whole process of submitting an abstract. Thank you very much to the members of the program committee who actually spent time reviewing the abstracts. It’s not easy to do this, so thank you very much. much to you as well. And thank you to the presenters for actually making their way here or staying up very late or getting up very early during today. Since we’re running a bit late, I’ll cut my presentation there. But I’m also, sorry, I don’t know, somebody else, did somebody else want to say something? No. I’m also acting as the chair and discussant for the first panel, which is taking place right now. We have, and I’ve made a list, we have four presenters, two of whom are here in the room, and two of whom are on site. We will start with a paper by Yik Chan-Chin from Beijing Normal University. Yik Chan-Chin is also a member of the steering committee for the Giganet Association. So Yik Chan-Chin will be talking about the right to access in the digital era. Then afterwards we have a paper that will be presented by Vagisha Srivastava on WebPKI and the private governance of trust on the internet. Vagisha is from Georgia Tech. And then the third paper will be on internet fragmentation and its environmental impact, a case study of satellite broadband, which will be presented by Berna Akaligur, who is from Queen Mary University in London, and she’s sitting on my immediate left. And then the last paper presented in this panel will be on ICT standards and the environment, a call for action for environmental care inside internet governance, which will be presented by Kimberly Anastasio, who is at American University. in the US and online. OK, without further ado, I will pass the floor to Yik Chan. You now have your 10 minutes to describe your paper. And we’ll move into the next paper immediately after that. OK? Thank you very much.

Yik Chan Chin:
OK, thank you very much, Jim. Can you hear me? Can you hear me now? Hello? Yeah, OK, thank you. Yeah, this is Yik Chan from Beijing Normal University. But actually, I’m in London. So this is a 2 AM morning in the early morning of London time. OK? So my presentation actually is about right to data access in the digital era, the case of China. And first of all, I would like to contextualize this debate of data access in the Chinese context. So first of all, we talk about why the debate of the access, collection, and dissemination of data become the center of the academic debates and the policymaking in China. Because there are three factors contribute to this discussion. First of all, is the internet and the data perceived as the important driving force for economical development in China. Secondly, is the rapid development of platform economy. And also, the mass production of data has raised the governance problem in the storage, transmission, and the use of data in China. Certainly, it’s because the role of a digital social media platform in data access and dissemination has strengthened the public demand of governments to act on the protection of right to information in China. So for those reasons, the academic debate and the national policy of access to data become where we hit it, the center of the policymaking in recent years. years. And also we found that the conceptualization of right to access to data in China and the formal informal rules, which is related to the legitimacy of the right to the public epistemic right to data is quite interesting. So that’s why I focus my study on the relation between the digital access, the right to access to digital data and the epistemic right. And so the data I use in this paper, including the national government’s policy regulation, and also secondary data. And so what is the epistemic right? So this is a right actually closely linked to the creation and the dissemination of law. It’s not only about the informed, but also about being informed truthfully and understanding the relevant of information and acting on it is based for the benefit of themselves and the society as a whole. So this is a concept, a start with, and also the epistemic of right is emphasized under equality, such as equality to access and availability of information, the knowledge equality in terms of obtain critical literacy in information communication. So, and also we need to understand the concept between data, information, and knowledge that interrelated concepts. So data is a set of symbols and kind of the representation of the Royal factors, but the information is organized the data and the law is understood, understood information. So these three concepts are interrelated. So therefore data is the form of law is create social process. So it’s a kind of a social constructed. So therefore it’s interesting. to see how the different social agents participate in the construction of the access right to data as a part of the equation of the social knowledge. So in my paper, I define different type of data such as data packet and price commercial test. And so I’m not going to details because of time limitation. So I define the right to access to data including two element. The first element is a right to access to public information, which is recognized as individual human rights by many jurisdictions and human rights body. The secondary is inclusive right for all member of society to benefit from the availability of data. So this is a mighty definition of access to data in my paper. So at a global level, there’s a different debate about a right to access to data. For example, we got academia recognize the data as a, it’s not a public good, but a leverage resource. But we also get other academic like Victor from Schumpeter from the OII in Oxford, which defined the data as a non-rivalry information and a public good. So it’s open to open access. And we also have like a European commission and the World Economic Forum. They provide different strategy to how to access to data including data access for all strategy or like what the economical platform they want to create a data marketplace service provider. So therefore the right to access to data can be traded in an open, efficient and accountable way. So therefore it’s tradable, data is tradable and they can be managed by a platform. provider, where European Commission’s approach is more like data access for all strategy, and the business to governments data access have a different requirement, and also creation of common European data space for important areas as well. So, look at the Chinese debate. The Chinese debate is interesting because they never treated right to access to data as an independent right, but as a part of the right to information, and also treated as a data as a property right in China. This data is not treated as a kind of right to information, or no, it’s not treated as an individual right, but it’s treated as a right to information or as a property debate in China. So, in terms of the public information, if the data is owned by government, so there’s a different approach. One is that this is the public data, which is public good, it’s owners, it belongs to all people. So, the second approach is the data, public data should belong to the state, and the non-public data, like a personal data, should be subject to personal protections. But there’s no debate about what is the right to access to the personal data, it’s not explicitly discussed, and also the equality nature of the epistemic right, such as equality to access and availability of information and knowledge has not drawn much Chinese debate attention as well. So, data access right, therefore, in China is treated as a property discussions. So, they want to formulate a trading system, so that the data can be traded to generate value, and so that is their approach. And this kind of definition… information actually is also triggered by the government policy project and the utilization of big data. So therefore Chinese academic debate are heavily policy driving in this sense, because this is the debate and that the position of the Chinese academia is heavily triggered by the government’s policy and there is policy driving as well. So very few of the academic debate actually support public good nature of data and the support of data sharing should be the default position and the control of access to data require justification because data is a natural public good. So therefore we can see the debate is pretty different from the other side in the global law. So there is the policy, the Chinese government’s policy regarding to the data, how the access of the data. So from 2015 to 2020, there’s a different action plans and the big data development plans. And also the most important plans is opinions on building a better market allocation system and the mechanism for factors of productions. And also this building a data space system for the better use of data as a factor of production. So basically the policies defined data property right consists of three rights. And so they treat data as a property and the data has a property right. But this property right consists of three rights. So like one is the access right, process right and the exchange right. So the property right is divided into three rights in the Chinese context. So, and in here we want to look at the definition of the how. How do they provide the right to different data? For example, the public data, this is the data generated by the party and the government agents, enterprise and institution in performing their duties or in providing public service. So the access of policies strengthen data aggregation and sharing. So you can access to this public data, but you need to authorize. And also there’s a conditional fee access, but also for particular data, you have to pay for it, okay? And so, but the public data is not to be accessed directly. They must be provided in the formal of models, product or service, but not in original data site. You cannot access to the data set, but you can access to the model or product or service, you know, generated based on the public data. So the second is personal data. The personal data is about personal informations. And so they have process, a process they can collect, hold, host and use of data with the authorization, but those personal data has to be normalized. So to ensure the information security and the personal privacies. So protection of the data, right to data subject to a copy and the transfer of the data generated by them. So you have a right to access this personal data, but you can only obtain a copy or transfer the data generated by the platform to other platforms. So this is the right offered to access to personal data. But for the enterprise data, data collected in the process by the market in production and business activities, they do not involve personal information or public interest. So they recognize and protect. the enterprise right to process and use of data and protect the right of data collector to use data and obtain the benefit, protect the right to use data or process data in commercial operations. And they also regulate authorize of the data collector for third party. And the original data is not shared or released, but access to data to analyze as shared. So government agents can also obtain enterprise and issue data in a coordinate of law and regulation in order to perform their duties. So this is the right to access to enterprise data. So the conclusion is that, first of all, access to data is not a defined aspect of the epistemic right. But in the Chinese context, they have a different but also similar interpretations. So because of the epistemic right in the Western academic literature, they are more from the sociological nature of the creation and dissemination of information knowledge. So the right are underpinned by the normative criteria of equal access, the availability of information and knowledge and the use for the benefit of the individual and the society as whole. So data is treated as a form of knowledge. It’s a non-verbal information good, a public good for the benefit of all. So therefore, open access and sharing of non-confidential data is proposed. So in the Chinese context, epistemic right has not yet drawn any attentions of Chinese academic debate. So the close related concept to the epistemic right is the right to information. But this right to information is approached from a legal perspective rather than from the sociological perspective. So they are stressing on the commercial right and also the public right to information right to the public data. So data is defined as one kind of factor of protection for the national economic development. So this is very tricky, because in the Chinese context, data is defined as a factor of production. You know, there’s a four factor of production, like a labor, lands, you know, and the capitals. But the data is defined as a fifth element of a factor of productions, which is very unique. So, and so therefore, the data is, has a non-variable and the non-sort. So they recognize data is a non-variable, and so it’s a character, you know, but they do not, data cannot be circulated in the market and like a lands, labor, and the capital. But the public good nature of the data has not been recognized in the mainstream academic publication of the government’s policy. So before, because of this, the public good or eco-assets dissemination of data are not mentioned in the public policy making. So under this kind of permiss, so therefore, data collection, analyze, and the process are aimed at unlocking the potential commercial value of data, especially for enterprise data, and define the various kinds of the data are focused on what then we go debate and the policy contextualizations. So therefore, the data, you know, in the data debate, in the Chinese policy and the academic debate, the focus is the right and the interest of data enterprise and not the individual right. The power imbalance between the individual and the corporation and the sharing of the benefit derived from data with the individual user and the data subjects has not been addressed. I think that’s all my presentation, the argument of my paper, thank you very much.

Moderator:
Thank you, thank you. And very close to time, so. we’re starting off well. We’ll move. Thank you very much, Yik Chan. I hope you can stick Okay. Okay. Okay.

Vagisha Srivastava:
Hello, everybody. Good morning. I’m going to talk about Web Public Key Infrastructure and the Private Governance of Trust on the Internet. I know it’s a cool title. All credit goes to Dr. Milton Mueller here in the audience, Dr. Carl Grindle, who couldn’t join us in person but is likely joining us virtually. Hi, Carl. And me, the lovely Ph.D. student you would want to have around. And of course we would like to appreciate the generous grant from the ISOC Foundation for this research. I am going to tell you the story of Internet security. And like all good stories, this one, too, starts with a tragedy. There was a security breach. A company called DigiNotar misissued 500 certificates on the Web. It was later identified as a man in the middle attack by Iranian hackers. But the company took no action for two months into the breach until the Dutch government intervened. I was only found out because one of the misissued certificates was for Google. So what is DigiNotar, what’s a certificate, and why is the story a significant plot point to what I’m talking about today? Okay. So we are all familiar with this. When a user types a Web address on the browser, the little lock sign tells us that the connection is secure. HTTPS that we have heard about enables a secure channel for communication. But the browser still needs to identify if the server or the client is in fact who they are claiming out to be. And it is done using digital certificates. Digital certificates authenticate clients on the Web. Certificate authorities or CAs issue certificates to website operators upon request. So issuing certificates, they’re supposed to verify the identity of the entity making the request. And the certificates then act as a recorded attestation. that the holder is, in fact, who they’re claiming to be. WebPKI is a web-based component that supports documents digital signing, signature verification, and document encryption, such as the certificates using the public key cryptography or asymmetric cryptography. Now, before we get into the details of that, let me first lay out what we are trying to do in this paper. There’s a bunch of literature out there from the technical community and the workings of WebPKI and the security or lack thereof, provided by certificates and certificate authorities. We’re looking at it from the governance perspective. We’re questioning the commonly held notions of public good and its delivery mechanisms. Public good is often provided by the government, private good provided by the market. And we are situating the governance of WebPKI specifically within the framework of public good being provided by private actors. We argue that the production of public good and some non-public goods require collective action, but not necessarily state action. Governments are but one vehicle of providing these goods, not the only one, and definitely not the most efficient one out there. The paper offers an innovative perspective on the dynamics of public production of private goods in the context of internet security. OK. One more slide. We use the framework of institutional analysis. We identify the public good in question. Then we identify the stakeholders, talk about if they cooperated or compete to achieve the said public good. Did they overcome the known barriers to collective action? We then describe the rules within which these stakeholders group institutionalized, and finally use some data collected to assess the efficacy of the institutions in achieving the desired result. That is the enhanced security. OK. Shifting gears again. If there are only two parts. parties communicating over the web. As long as these two parties can authenticate each other, the adoption and the use of encryption on the public web does not require any special form of institutionalized collective action. The hard part is the authentication process when there are multiple servers and multiple clients required in the process. It requires a reliable and trustworthy mapping of the private key holder to the public key. In the WebPKI ecosystem, digital certificates facilitate this mapping. When a server presents its digital certificate, which includes a public key, during a secure connection setup, the client can verify a certificate’s authenticity and trustworthiness. Split key cryptography eliminates the need to transmit private keys over insecure networks. However, it also creates an impersonation problem and certificates solve it for the web. A mismatch between the two, that is the private key and the public key, enables a man-in-the-middle attack. That is something that we saw with the DigiNotar incident. But this brings us to the question, how do we trust a CA to not be a bad actor or, worse yet, a compromised actor? There’s a chain of trust that enables us to trust a subsequent CA, where each subsequent CA or the intermediary CA has to comply with a certain set of policies set by the browsers. The endpoint, a root CA, is maintained in a root store by browsers and operating systems and has to go through a complex vetting process to be included in the root store program. Now, we have established the authentication is public good. Let’s spend a minute understanding why collective action is required. The web ecosystem as a whole needs effective authentication across the board. Security is not a private good, because a compromised certificate or a certificate authority has the potential to affect any website or any users across the system. alone have the incentives to provide it themselves and can be motivated by several factors. Browsers and operating systems cannot be responsible for screening every single website on the web. The digital ecosystem depends heavily on the trust to work and so needs authentication mechanism to be applicable everywhere and does need collective action to be enforced. Who are the stakeholders in the ecosystem? We identify four. Security risks are most concentrated on the top at root stores in the browsers and they have diminishing systemic effects as you go down. There are hundreds of certificate authorities, millions of subscribers who get the certificate and billions of end users and individual devices who rely on these certificates for authentication. According to Mankar Olson, collective action is costly. The coordination and communication costs and the bigger the group, the more the costs rise. The institutional solutions to the collective action problem for WebPKI focus on the top of the hierarchy, that is the browsers and the CAs, and it does not try to directly involve subscribers. The root stores act as a proxy for the end users and the certificate authorities act as a proxy for subscribers. We identify three institutional vehicles, the main character of our stories here. The certificate authority and the browser forum, which we’ll be talking about in a minute in more detail. The certificate transparency, which is an Internet security standard for monitoring and auditing the issuance of digital certificates through decentralized logging and ACME, or the automatic certificate management environment, which is a communication protocol for automating interactions between the CAs and their user servers. Okay. So the certificate authority and the browser forum. Remember the DigiNode slide? Well, I might not have been completely honest when I said that the story started with that tragedy. It started a little bit before that. Narrative privileges. From 1995 to 2005, certificates were being issued with virtually no standardized governing rules in place. The Certificate Authority and the Browser Forum was founded in 2005. But in 2012 was when the forum started actively making rules for the system. Since 2012, the forum has produced a set of baseline requirements for CAs that tackled convergence of expectations between the browsers and the CAs on issues such as identity vetting, certificate content and profiles, certificate revocation mechanism, algorithms and key sizes, audit requirements, and delegation of authority. The baseline requirements have been revised about every six months by means of formal ballots approving amended text. The second part of our methodology involves studying the CA Browser Forum in particular. We first collected data from the forum meetings. This included attendance records, meeting minutes, and we also had 10 semi-structured interviews. To capture the market share, we used random sampling to sub-sample around 2 million domains data from the Common Crawls database. The CAP Forum was described by one of our interviewees as a place for the root stores to coordinate their policies so that they don’t create conflicting policies and to get feedback from the CAs directly. While we do see on the chart that there are more European members than US member organizations within the forum, US participants are more active when it comes to participation. We tracked the activity of different stakeholders in the forum, and you can see an increase in participation following 2017, which was because of an addition of a new working group. Between 2013 and today, the browsers have become. And we also note the active participation of the U.S. federal PKI. There are many economic conflicts of interest between the browsers and the CAs, but we can see from our analysis of the voting records that in 92% of the ballots, a majority of both stakeholder groups supported or opposed a proposal. In only 2% of the cases did the browsers favor a proposal that was opposed by a CA. We see this data was collected in February 2023. We see that Let’s Encrypt, which is a civil society effort to encourage the use of encryption and to automate the issuance of certificates, is dominating the market. This didn’t used to be the case before, but it clearly showcases how automation has led to increased adoption of DV certificates. If externalities caused by poor CA practices are the main drivers of collective action, we should expect to see the gradual homogenization of the root stores across browsers produced over time. We do see substantial overlap in which root certificates and the browsers admit the root certificates into the trust store. We also see a gradual reduction in the number of trusted certificates within the root store over time. The measure of efficacy is interesting here. We see that the encryption and the web has increased significantly over time. We also observed that misissued certificates have decreased over time. However, while the global misissuance rate is low, this is predominantly due to the handful of large authorities that consistently issue certificates without error. The three largest CAs that we identified in the market share, Let’s Encrypt, Cloudflare, and cPanel, signed 80% of the certificates in the data set and have near zero misissuance. mis-issuance rates. Now, perhaps the most important finding, why have private actors taken the lead in security governance in this case? Well, governments are politically structured such that they cannot represent a global public interest or produce global public goods easily. The authority is fragmented, and there are numerous rivalries among them, especially when it comes to cybersecurity. The platforms have a greater alignment with the security interests of their users than national governments. An insecure web environment hurts their business interests, while governments are not directly harmed. Also, they often have a strong interest in undermining encryption and user security for surveillance purposes. The implementation of WebPKI involves an elaborate web of technical interdependencies. Security measures impose costs and benefit upon all four stakeholder groups. Those directly involved in the operation and implementation of WebPKI standards are in a better position to assess the cost and the risk of the trade-offs and make rational decisions. But the government is not entirely absent. We see USFPKI organizations participating actively. We have observed from the meeting minutes and the interviews that EU is pushing for the EIDAS regulation on this ecosystem. We also see involvement of national CAs, mostly from Asia, representing the interest in the forum. Now, why should you care? Well, a lot of times when insecurities happen in the web, the blame is often put on the users, because they engage in unsafe practices. You remember the always proceed option that is available for the certificate mismatch? This is not a perfect system. There are still compromises that can happen because of misaligned incentives, or just oversight because of redundancy of the process. In some cases, these could be intentional, example selling of backdated certificates. But it’s always better to know a little bit more. All of the good things about the internet relies on this ecosystem. For example, in- Ensuring cat photos are linked to a secure server, which they claim to be when you’re surfing the web. The topic is also understudied within the Internet governance community and hence would be of relevance to the scholars present in the room. Okay. So like all good stories, this doesn’t end here. We will possibly have a bunch of sequels. We are planning to do a study about how governments intervene in the system, get in more detail. We also would like to have the measure of effectiveness of certificate transparency that we mentioned in the institutional vehicle part and maybe check out if the impact of automation or the ACME on the system. That’s all. Thank you very much.

Moderator:
Thank you very much, Vagisha, you, again, you did the timing very well. Thanks a lot. We now move to Berna . I will just set up your slides on my computer and in the meantime, I give you the floor. Okay. Do you need your computer? Okay. Thanks. Thanks. Thanks. Thanks.

Berna Akcali Gur:
Thank you, Jamal. So this paper is one of the outcomes of our research project funded by the Isaac Foundation. global governance of Leo Satellite broadband. In that project, we focused on the jurisdictional challenges to the integration of mega-satellite constellations to the global Internet infrastructure. So the report resulting from that study can be found on the website that I’m sharing in our PowerPoint. There you can see the link for a separate ISOC project on Leo Satellite connectivity. The ISOC group assessed the subject from a purely policy perspective, and we joined our forces at times, and I recommend the report as well. Now as we were conducting that study, the satellite broadband industry picked up pace. More and more applications have been filed at the International Telecommunications Union for new mega-constellation projects. While there’s a certain degree of excitement about them, the scientists studying space and astronomy have raised their voices about the impact of these projects on space sustainability and space environment. So we decided to analyze the tension between the competing interests that are universal broadband connectivity for sustainable development and cyber sovereignty on one side, and sustainable use of space resources on the other. Of course, from a law and policy perspective, which inspired this paper. It’s still a draft, so we welcome any constructive feedback. So what is new about satellite connectivity? We all know that space technology satellites have long been a complementary part of the global communications infrastructures. Most often, they have provided last-mile solutions in remote and sparsely populated areas, such as islands or villages and mountains, because these areas are not easily served by terrestrial networks. And also, we shouldn’t forget, we still use them when we are in transportation, such as ships and planes. So communication… satellites are not new. The idea of multi-satellite systems, the constellations, are not new either. Earlier constellations in the low Earth orbit had emerged in the 90s. Orbcom, Iridium, Globstar are examples. These consisted of smaller number of satellites and they provided speech and narrow band data. They were not viable businesses for mass consumption. They were expensive projects and they couldn’t compete with the speed and capacity of the terrestrial networks, so they didn’t really receive much attention. Recently, advances in communications and separately space technologies, dramatic reductions in launch costs, financing by the technology sector, and most importantly, the ever-growing broadband demand, drove a second wave of satellite constellation ventures. These are very ambitious projects with increased number of satellites. Some leading examples are 42,000 satellites planned by the US venture Starlink, 13,000 satellites planned by the Chinese venture Guowang, and 648 for UK and India venture OneWeb. So newness in the sense is in the scale of these projects. So how do these ventures relate to sustainable development goals? As you all know, for most social, economic, and governmental functions, the use of applications enabled by the low latency, high bandwidth connectivity has become even more essential. Low latency is particularly important for the web-based applications that require high speed. Some applications that I can mention are Internet of Things, video conferencing, video games. The new constellations are able to match this requirement because the data travels much faster when the communication satellite is in use. in the low-Earth orbit simply because the distance is much shorter. So, the promise of broadband connectivity with minimal terrestrial infrastructure is almost miraculous from a connectivity as an enabler of SDGs perspective. That is why the emergence of these satellites have been met with enthusiasm in the context of their potential contribution to bridging the global digital divide and global development. But how does the system work? Are these satellites infringing on territorial sovereignty of countries by providing Internet from the skies? Well, we should first understand the technicality behind this to understand how the domestic regulations work. So, the ground stations, they act as a gateway to the Internet and our private networks and the cloud infrastructures. Currently, the distance between the ground stations is required to not exceed about a thousand kilometers. The second component is the user terminal by which the users connect their devices to receive broadband services. These are provided by the satellite company operating the system. Additionally, satellites need an assigned frequency spectrum, a limited natural resource, as the satellites communicate with the Earth through these radio waves. The user terminals will link to the satellite in closest proximity, which may be a different satellite in the constellation at a given time. That satellite will be connected to other satellites, one of which will have a connection to the ground station. Then, there is a cloud infrastructure. The satellite companies will use cloud infrastructure, which is a mutually beneficial relationship as the cloud infrastructures benefit from their connectivity as backup to their existing setup. As I said, the provision of satellite services is not limited to the cloud. within a particular country is subject to that country’s laws and regulations. These are called landing rights and the countries decide the terms of landing rights for themselves. For example, the ground station. For that, the companies will need authorization from each relevant jurisdiction. They will also need to obtain a license to use the frequency spectrum. If they provide their services directly to consumers, they will also likely need an internet service provider license. What is more, the importation of their user terminals will also be subject to import requirements of the national authorities. So the provision of satellite broadband service by a company is subject to a wide range of laws and regulations of the host country. For example, Russia and China have already declared that they will not allow the provision of satellite broadband by foreign service providers. The countries with space capabilities felt that it would be better if the existing domestic control mechanisms were complemented by ownership and control of their own mega constellations. These have been frequently referred to as sovereign structures. Competition is perceived to benefit markets and end-users. So at first glance, it seems like we have more of a good thing. With more choices for all, business models will mature and that should be celebrated. But when we look at the reasoning behind these investments, the governments emphasize their strategic value and the significance of sovereign alternative infrastructure for digital sovereignty and cybersecurity purposes. The financial viability of these ventures is still not certain, so there isn’t much emphasis on that. The digital sovereignty and cybersecurity concerns incentivize countries and regions to align their communication infrastructure and is controlled along their borders. I’m looking at Milton because he coined the term alignment as fragmentation. these ventures are also manifestations of the ongoing fragmentation. Okay, so the foreseeable harms of the new competition to space, particularly the orbital environment, are grave. From launch emissions to orbital debris, the current regulatory framework is simply not sufficient to tackle the problem in time. In time is the operative word here. Due to exponential increase in the number of space objects, space traffic has become more challenging to manage, and more collusions are anticipated. The space environment is becoming more prone to collusional cascading, which means that once a certain threshold is reached, the total volume of space debris will continue to grow. This is because collusions create additional debris, leading to more collusions, creating a cascading effect. Such a catastrophe may render not only the low-Earth orbit, but almost all space resources inaccessible for all, even for future generations. So because there was a competition among powerful nations to each have as many constellations as they could afford, the orbital environment may become unusable for any service, including connectivity and travel. So future generations may be locked in, trying to figure out how to clean up the orbits and restore them. Space resources are limited resources. Orbital space is already congested. Space traffic is difficult to manage, and there is a risk of collusional cascading. So is the promise of constellations, especially the multiplication of space-based internet infrastructure, worth the risks we impose on space environment? So internet governance scholars have known for a long time that advancements in most information communication technologies are perceived in terms of… in terms of their potential impact on global power structures. Megasatellite constellations are also deemed strategic investments, both in terms of space presence and in terms of influence and control over global Internet infrastructure. And I’ll just skip this part. So, the efforts have been deemed to have significant impact on the openness and unity of Internet. But now the same can have an impact on… Anti-fragmentation efforts are deemed to have a significant impact on the openness and unity of the Internet. But now the same can have an impact on the sustainability of space resources. So, we argue that the impulse to compete in the Earth’s orbits, a space that is already congested, should be mitigated in consideration of preserving sustainable orbital environment for future generations. Environmental efforts of global multi-stakeholder Internet governance platforms could inform environmental and sustainable outer space governance efforts, especially as they relate to space-based Internet infrastructure. Thank you.

Moderator:
Thank you very much, Berna. And yes, the timing was perfect. I will now move quickly to Kimberley. Kimberley, you are online. Yes, you are online. Can we please make Kimberley Anastasio a co-host? Thank you. And Kimberley, the floor is yours. Oh, cool. And we can hear you. We heard something.

Kimberley Anastasio:
All right. Can you confirm that everything is working properly? Wonderful, thank you. All right, hello everyone. And thank you to the GigaNet organizers. It is my pleasure to be here today talking to you about a project that is part of my dissertation research at the School of Communication at American University. And this project addresses the intersection of information and communication technologies, ICTs, and the environment focusing on ICT standards. We’re meeting now at the IGF and the Internet Governance Forum to set the environment as the main thematic track for the first time in 2020. And it is definitely not alone in such an endeavor among internet governance organizations. Recently, there are plenty of standard setting organizations, organizations that are establishing rules for how ICTs work and how information circulates on the internet. They’re also turning their attention to environmental concerns and working on the creation of what they call quote unquote greener internet protocols. This paper that I’m presenting today is this first step towards this broader research project on this ICTs standards, in which I talk to people working on ICTs in relation to the environment on the implications that standards can have for enabling and constraining environmental rights. Meaning here, the right to a healthy, clean, safe, and sustainable environment as established by the United Nations Environmental Program. And this is a research that is rooted as part of this communication field called environmental media studies, a field that addresses this overlapping spheres of environmental issues in the production and uses of new media, including ICTs. But still among the researchers that are part of this environmental media studies fields, focus tend to be more on data centers, on AI, on the things that are more closely visible to internet users. And one niche, but fundamental part of ICTs is usually overlooked ICT standards. So I’m now joining two infrastructure studies that deal with this more hidden layer of ICTs. Moreover, we know internet governance scholarship for a long time has been examining internet standardization processes by seeing ICT standards as things that are not just technical but also things that are political. But when we deal about with the values and politics that can be inscribed into internet governance artifacts such as protocols we usually focus on those rights that are more closely related to the digital world like freedom of expression or privacy and data protection. But environmental rights should be considered as this another example of how politics and rights are embedded into internet governance and how then ICT standards may be another venue where the politics of around the environment are enacted. The broader research is an analysis of both the international telecommunication union and the internet engineering task force and the work that they are doing that are related to the environment. For this paper I relied on semi-instructed in-depth interviews that I did with 18 interviewees that are experts that have already advocated for environmental concerns about the internet and its infrastructures or that have done this or they’re currently doing this. So when I talk to the people that are already working on this umbrella between environmental rights in the context of ICTs I ask them about their knowledge about standards and their perspective on where ICT standards could fit the broader agenda and despite most of them mentioning that they have very few knowledge on standards their answers ended up echoing both what is being said on the literature but also on echoed some things that some standard setting organizations are already working on for a while. So echoing the literature interviews situated this debate about ICTs in the environment in these two parallel understandings of the technologies. On one hand digitalization allowing us to enable a more sustainable economy so ICTs being employed basically to tackle climate change and then on the other hand focusing on the negative aspects of digitalization. So how digitalization by itself can also have an impact, be it actually positive or negative on the environment by itself. But in the end, what most of them noted is that to act on this intersection between ICTs and the environment, one should account for both things. So how these standards can help the sector be more environmental friendly, by enabling other things to happen among other sectors, but also how environmental friendly the ICT technologies themselves can be. And then when it comes to establishing what roles these ICT standards could play in enabling to account for both these two things, the promises and the pitfalls of digital technologies, the experts highlighted two main areas of action. So basically establishing a common language or parameters for dealing with this issue, but also establishing mechanisms for accountability. So, and both of these things were mentioned in relation to the standards that would help us avoid carbon emissions in other sectors, but also in standards that are trying to account for or cut down the environmental impact of ICT themselves. And at the center of this discussion on intersection and the role of standards in this case, is basically this necessity for quantification and addressing the materiality of ICTs. And also the fact that these conversations that we’re starting to have more on the standards setting organizations are kind of leading the game. And they come in a context in which the mindset of the ICT sector is one of evidence and consumerism. And we know that quantification is a vital part of what standards is, be them ICT related or not, because the standards are the things that define the procedure. they regulate behaviors, they ensure interoperability, and for that, quantifying, classifying, formalizing processes is key. But when it comes to measuring the environmental impact of ICTs, be it from any perspective, so software, hardware, or networking, this is not an easy task. And this means that even when people do recognize the physicality of the internet and the impact that ICTs can have, there is no simple way to quantify its relation to the environment. Be it from, again, the carbon footprint, the energy consumption, the natural resources extraction, disposability, and things of these sorts. But one thing that we have to keep in mind is that materiality is more than just this palpable thing. As seen on the slide, materiality also refers to this shape and affordances of the physical world, but also the social relations that are part of our lived reality. So we address ICTs as something that is physically located and situated in the environment, although it is surrounded by discourses of immateriality. But to act on this issue does not necessarily mean that we should be stuck if we’re not capable of precisely measuring this entanglement of ICTs in nature, or that we should stop at the measuring phase alone, precisely because we recognize ICTs as something that is relational. An interviewer responded to something similar like that when they identify what they believe to be the root of the problem. The root of the problem being not the environmental impact of the separate devices, products, services, the separate standards themselves, but the socio-economic model behind how society deals with ICTs. One interviewer, for instance, said that standard setting is really important because it allows for environmental best practices to come in and enter its way at the technical level, but that would be in direct competition with the business as usual business model of the entire sector. But some standard setting organizations are already engaging in environmental related discussions, both by creating standards that relate to the environment, but also as organizations themselves engaging in these discussions in other settings. And the two organizations that I’ll be studying further for my dissertation, as I mentioned, is the ITU and the IETF. The ITU has already more than 140 standards that are related to the environment. It has one study group that’s called Environmental Circular Economy that is dedicated to dealing with these issues. The mandate is to work on the environmental challenges on ICTs. And the IETF is something that is more recent, since the ITU has been following the UN Sustainable Development Goals for a while. The IETF is now catching up on this issue as well. It has almost 20 standards that are more closely related to environmental issues. And it also created recently a group that is dedicated to addressing sustainability issues in relation to ICTs among them. Just to mention a couple of examples, the IETF has been dealing with a protocol for Bluetooth to turn Bluetooth less energy consuming from the perspective of internet of things. The ITU has several measurements on the carbon footprint of the ICT sector. And as I mentioned, the scholarship have already established that the standards are these political things that can incentivize or constrain certain behaviors. Two important ICT standard set organizations are already engaging and increasingly acting on environmental matters. And the next step for this research is to delve into the work that they’re doing and try to investigate what areas they are trying to tackle, what interests are also being addressed there, and how can we move forward with this agenda even beyond these two organizations and further down on their standardization process. Thank you very much and I’m available for any questions or comments that you might have.

Moderator:
Thanks very much, Kimberly. Right, we have a few minutes and I will abuse my position as chair and hope that the Daniel, the next, you don’t mind starting a couple of minutes later. Just so that, because we started late. Right, so I will first of all just make some comments on the papers that we heard and the papers that we received from you. Thank you very much for those. Before I try and go with some leading questions and hopefully that will stimulate a bit of a discussion. So if you have questions in the audience, already start thinking about how to formulate them and pray that I don’t raise them first. I’m sure I won’t. Okay, so I will go through the papers in the order that they were presented. So Yig-Chan, who is still online, congratulations. It’s four o’clock in the morning or something, so congratulations. Really interesting paper. The fact that it’s already published means that my comments are moot, but I was thinking of how you could actually take this further. I mean, there’s lots of many interesting statements in your paper and the way you actually position the debates around this and almost show the similarity between the different approaches that you see in the different regions of the world that you looked at. Very interesting. I would have loved to have learned more about your reflections on the sociological nature of that, that whole data divide. So data, knowledge, information, and so on, and see how that fits in. One of the things that kind of touched me in the paper was the way you explained how those differences and those similarities come out. And so I was very happy to read that, and that stimulated a lot of thought in my head at least. However, I would have liked you to have been a bit more argumentative in that sense. You laid out some of the conditions, and you showed that there are differences and there are similarities, and it would have been nice to see how that plays out in different policy debates that are going on. Because I know that the EU has its data for strategy, strategy for data, and I’m sure that that plays, you know, you could do a really interesting policy analysis on that, and that might be a next step that you want to go for, to actually try and unpack how these reflections on epistemic rights and so on actually play out in the policy field. That might be really kind of interesting to see how that comes out, because of course On the one level, there’s a lot of conflictual discourse around the different approaches, but what you show is that there are some fundamental similarities as well, and it might be interesting to do that. I was also thinking that there are maybe other regions in the world that have different approaches to data in that sense, and it might be interesting at some point to also reflect on that, maybe, you know, in the paper, in the next publication, maybe interesting to have a section that looks at global approaches, and maybe I know that in Japan they have a different approach to data, to treating data in that sense, so that might be interesting. Interesting. Vahisha, your paper, and Milton is in the room as well, I could definitely see the questions around the governance of this, and congratulations for that, because I think also in the paper as I read it, I felt that you were not struggling with it, but it was something that you said, okay, I want to look at this as a governance question and not a technical question, but I need to spend lots of time explaining the technical issues in order to understand the governance side. And so although you focus on the fact that you want to do a governance paper, as I was reading it, I felt a lot of the technical knowledge, which was very useful, but actually left little space or mind space for the reader for those bigger governance questions, which are really interesting. I also was thinking a bit about how you addressed, so how you said you talked about your narrative prerogative, and how you addressed the story from 2019 first, but that was maybe the problem that made the context and the issue visible. So maybe that’s how you do that. You don’t have to say, I lied, right? I was also wondering, so you mentioned that there are different voting, you said that actually the platforms and the certificate authorities agree on a lot of things, right? I was wondering, the process leading up to the votes, that would be really interesting to understand, right? And then, of course, you mentioned that it’s a private organization dealing with public goods, and I’m going to ask the question that you asked us to ask you. Can you please explain how government are involved in that, because they are not explicitly involved, but they are involved, right? And that would be interesting to see, because, of course, there are multiple dimensions to these stories, and I would have liked to have heard a bit more, or teased you a bit more in that sense. Berna and Joanna, thank you very much for your paper as well. When I first read the title, I thought that you were trying to do a lot in this paper. It’s covering low-Earth satellite, low-Earth orbit satellites, it’s covering environmental issues, it’s covering cyber security issues, it’s covering quite a lot in the paper. And I found that, at first I was thinking, wow, how are they going to do all this? But you managed, so that was good. I was wondering a bit, when I looked through the paper, I felt that the ordering was sometimes, there were some bits that probably could have gone a bit earlier, in order to help me understand the flow of the paper. So I’ll give you some examples later, but, I mean, for example, you introduced the concept of mega-constellation, and I didn’t know what that was until I’d read two pages later. So things like that. But also, in the way that the argument builds up, I was wondering, Section 4 may be more interesting than Section 3, and vice versa. There are, you’ve mentioned, rightly, the security concerns, right? But I was wondering, so there are also, the security concerns and the sustainability concerns actually cross over quite a bit, I think. Because if somebody were to shoot one of these things out of the sky, and if then the cascading effect happens, I was wondering, you treat them as two separate things. And I was thinking it might be interesting to also show that there are direct connections between them. those two. And then you, of course, you go on and you talk about the ITU, but I know that there have been international collaboration efforts. I know the European Union has been trying at least for a long time with the space policy to develop things, and I was wondering, I didn’t really see mention of that too much, and I thought that might be interesting to bring in, because that then addresses the questions that you had raised in the tensions between the national and the global, right? And there I would be interested, you talked about sovereignty as, or states using sovereignty to say we need our own mega constellation, but then in the end that still needs a coordination effort, right? Unless they want to knock each other out of the sky, right? So that might be, that’s also something that I think you could raise in your paper a bit more, okay? And then in terms of sustainability, it might be worthwhile to clarify at the beginning of the paper what you mean, because I was also thinking, oh, is it more environmentally friendly to put satellites in low Earth orbit than to have routers or whatever data centers on the planet Earth? But actually, no, you meant something else. Okay. Kimberly Rushing. Environmental rights. Thank you very much for this paper. Really worthwhile effort. It’s part of a broader project, and I’d love to know a bit more about how that fits in. I think that could be a bit clearer in the paper. You focus on the role of standards authorities. You look at the IETF and the ITU. I was wondering, in your reflections, do you actually think about the normative biases that are built in to these actors? I mean, you mentioned the work that’s been done by other scholars. that try and unpack those. But right now, you’ve gone through the interviews and you’re looking at those quite literally. And I was wondering if you do that. I think there’s also quite a lot of work, maybe not directly just on internet standards organizations, but standardizations bodies as a whole. And I know you come from the literature that’s looking very much at sustainability and standards. But there may be some work. Also, there was quite a lot of work in the 1980s and the early 1990s published in this space. So that might be interesting for you to look at. I was also thinking, you do kind of implicitly look, or no, you explicitly mentioned it in your presentation. You look at environmental rights in the human rights context. And I think that was very interesting as well. One of the things, I know you’re only looking at standards, but another area where there’s been quite a lot of reflection is on the implementation of data centers and the environmental consequences of those. And I was wondering if some of those debates in the literature might not be interesting. So those are my far too long, but hopefully useful comments. I would like to see if there are any questions from the floor. Microphones have been put out. So if you want to raise a question, please go and stand behind the mic. Otherwise, we’ll go back to the presenters for a quick response. Milton is going to the mic. Go ahead.

Audience:
Just a question about the satellite paper. You talked about the creation of these government-run mega constellations and somehow that’s related to fragmentation. By the way, I agree with Jamal that it’s hard to combine the environmental… global commons Tragedy of the commons aspect and the fragmentation aspect of your paper, but I’m going to focus on fragmentation You know what What are they actually doing are they proposing to not allow other? satellites to distribute signals to their country And what is their leverage for doing that and then they’re going to set up their own? What why do they need to set up their own mega constellation to to do that if they’re only concerned about their own territory

Moderator:
If there are no other questions at this moment, we’ll go back to the panelists should we go back in the order You can did you want to mention something?Vagisha

Vagisha Srivastava:
Thank you for the comments Because of lack of time I’m not going to address all of them But I think the voting process that you asked about was interesting me when we were going through the ballot readings and everything also the interviews What we learned about was that a lot of formal language that goes to voting is already agreed upon and the consensus mechanism It’s built pre voting or pre setting the language itself that could be one of the reasons why a lot of these votes are Non conflicting, but it is still interesting to see how Sort of sees react to the browsers react to the process itself Do you have any specific question that you want me to answer?

Berna Akcali Gur:
Yeah, okay, Joanna may add to my comments if she I I think she’s still online. Yes, okay, so Jamal, thank you for your comments and I agree, we are trying to address a few important topics all at one in one paper and I’ll take a look at your recommendations about the EU space policy. I was thinking that the EU space policy and the fact that they are trying to also deploy their own satellite constellation may be a contradictory move because I think there was a paper, there was a EU paper, research paper represented to the parliament saying that the EU doesn’t actually need to own a mega constellation for purposes of access but they still thought that it was important from a strategic and security perspective but I should maybe add that to the paper. And about Milton’s question, so from my understanding of fragmentation, there are different manifestations of fragmentation and one manifestation is through government policy and regulations where the governments try and establish control over infrastructure and the components used for that infrastructure. So decoupling at the 5G infrastructure, for example, was an example of that, whereas the groups of countries have refused to use each other’s technology for cybersecurity reasons and of course there was deeper geopolitical motives behind that as well. So when I look at the government papers justifying investment in these mega constellations. which are elaborate infrastructures, the governments refer to them as sovereign infrastructures that are necessary for cyber sovereignty and cyber security reasons. And so it is from their policy papers that I see that they see these infrastructures, although they are not terrestrially located within the land, control of these infrastructure is still by the companies that are located within their territories. So they are very much seen as a territorial infrastructure from those that can deploy these constellations. So what about the others? For the previous research that we had done, the countries that cannot have their own mega constellations but are planning to use them, see data governance, for example, as their major concern. So, for example, the gateways to the internet, the ground, the gateways to the internet, the ground infrastructures that are need to be, that you need to have every 1000 kilometers. For example, the countries were saying that if we are going to authorize services of these mega constellations, maybe we would like to require them to have a ground station within our territory, even if they don’t need one, even if there is one within 1000 kilometers. And it is to control, the intention is to control cross-border data transfers. And so it is still, the intention is to control the cross-border data transfers and to maintain the control that they already have or extend that control in accordance with their policies that are still developing as the geopolitical tensions intensify. So I hope that was, that answered the question.

Moderator:
I think you have to, Joanna, did you want to add something? Nothing further from me, okay, perfect, Kimberly. Okay, I’ll try to be very fast and say

Kimberley Anastasio:
thank you very much for your comments. Jamal, you mentioned three things that are the things that I’m currently working on, which I think it’s very appropriate as feedback. Yes, I am trying to now situate my study better among studies that deal more with standardization as a whole and not just standardization from an ICT perspective. And also just to explain a little bit further this project, the bulk of the project is based on a methodology that involves the content analysis of the almost 200 standards that have been either approved under discussion or rejected in the two organizations that I am analyzing and interviews with the participants of these organizations, so the ITU members and the IETF members. But in order for me to properly understand the work that these organizations are doing in light of the possibilities for the ICT standardization sector as a whole, I felt it was needed for me to come up with a framework of action, not only from the literature on the environmental impact of ICTs, but also from the perspective of those working on the ground trying to build this agenda in international organizations and spaces like that. So that’s where this smaller project fits the broader one. It is to help me come up with this framework of action through which I’ll then analyze how two particular organizations are engaging in this matter. But thank you very much, and I’ll wait for your further comments on the paper. So thank you, thank you all.

Moderator:
Right, thank you very much for all of the interesting papers, and hopefully. This has, well, I think this has been a great start to the symposium, so thanks very much to all of the speakers and all of the paper writers and everything like that. Thanks a lot. I will now ask you to leave the floor. Should we just leave it here? Yeah? You take it from me. Danielle, could I, I think, given the interest of time, we won’t have a five minute break and we’ll move straight to the second panel. Is that okay with you, Danielle? Yeah, okay, we can have a bathroom break. You will be timed. So if you need a couple of minutes, just use a couple of minutes and then otherwise we’ll get back to you straight away. Okay. Well, I was gonna say you can sit, I’m gonna sit down there. You don’t need me, do you? No, okay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I’m Danielle Flonk. I’m an assistant professor in international relations at Hitosubashi University in Tokyo and I’ll be chairing and discussing this session. Today we have Nanette Levinson who’s presenting on institutional change in cyber governance, Jamie Stewart on women peace and cybersecurity in Southeast Asia, Kamesh and Ghazim Rizvi on making design and utilization of generative AI technologies ethical. Basically everybody gets ten minutes to present after which I will give five minutes of feedback. Nanette? Is Nanette here? Okay, go ahead.

Nanette Levinson:
Yes, can you hear me? Yes, perfect. Thank you. Good morning Kyoto time, good evening my time, good day to whatever time zone one may be on. The papers that were just presented in the first panel set the scene I think beautifully. They were fantastic papers, wonderful discussion. I’m going to share my screen now. I believe that’s working, excellent. I’m going to share with you some work from the past year of a project that I’ve been working on for the last four years. I’ve been researching the United Nations open-ended working group dealing with cybersecurity that began in 2019 to 2021 and the second rendition continues now and actually it is due to go until 2025. And as we all know, this has been a particularly unusual time period punctuated by a pandemic and the war in Ukraine. What I’d like to do in my presentation is focus just on this past year, 2022 to 2023. I’m going to share with you a few of my key research questions, my major findings, several of them, and just very briefly some thoughts on the future research in this arena. I want to highlight three research questions. I’ve been thinking about the field of internet governance for a number of, or at least several decades, and I wanted to have a chance in this paper to take a long-term view, thinking about institutional change using various disciplinary approaches. And I was particularly interested in what could be called deinstitutionalization processes in cyber governance. The organization on which I focus within the discussions at the Open-Ended Working Group is a proposal for something called a program of action, which involves a more regular way to include other stakeholders, stakeholders other than governments, as a part of regular institutional dialogue related to cyber security at the United Nations. In the paper, I formulate a cross-disciplinary approach to these analyses, and I ask the question, how do the findings from this longitudinal study of the Open-Ended Working Group relate to work on institutional change? And further, I ask what possible catalytic factors could be at work related to such changes? And in order to do that, I go back to some work that I did a little bit earlier, where I looked at institutional change indicators, and I want to highlight here here, three of them. First, an indicator for institutional change or incipient institutional change is the absence of an authoritative analogy or the presence of inconsistent isomorphic poles. Second indicator, a change in the legitimacy of an idea and a change in the rhetoric related to it. Third indicator, and these are not sequential, they’re rather a chaotic continuum. The third one is the emergence of a new variety of organizational arrangements consistent with a new idea. And all of this, all of these indicators I look against the setting, the backdrop of increasing uncertainty and turbulence in the environmental setting of the open-ended working group and indeed major geopolitical pulls. Here are a few of my findings at a glance. My earlier work on the open-ended working group from 2019 to 2021 noted the presence of what I called an idea galaxy. And what I mean by that is simply a cluster of specific words that appear near one another. And the subsequent positioning of these words next to or very near to a value or a norm that is already more generally accepted. So in 2019 to 2021, I discovered the following words, human rights, gender, sustainable development or international development or developing country and less frequently non-state actors or multi-stakeholders. And they often were linked both in oral presentations and in written submissions. And I use content analyses on all of these. They were most often linked to sections dealing with capacity building. Interestingly, the 2021. open-ended working group final report, which was adopted by consensus. And again, this echoes some of the discussion about consensus in standards organizations highlighted in the earlier papers. Interestingly, those words were adopted by consensus in that 2021 open-ended working group. But what has occurred in the past year, 2022, 2023, is a fascinating development. The same idea cluster appears in many submissions, many oral presentations, many informal sessions with other stakeholders, but there also appears another opposing cluster or idea galaxy that I term a dueling idea galaxy. Let me say more about this. We remember the idea cluster that was accepted by consensus in 2021. This appears in much of the discussion and was going to appear and did appear in draft versions of the annual progress report that was supposed to be adopted by consensus at the fifth substantive session just a couple of months ago in New York City at the United Nations. However, interestingly, a dueling idea cluster was introduced on the very last day of that discussion in opposition to accepting the report with those words from 2021 in it as a consensus agreement. And instead, the Russian delegation, along with, I guess, the Chinese delegation, Belarus, and maybe four or five other countries, proposed or said that it was not going to go along with consensus, that it strongly wanted and it had a rationale. And I put this in italics that their idea cluster was rewarding such as convention. or treaty. And this really signified their commitment to the development of new norms in the cybersecurity area. And it also signified opposition to this program of action idea as a part of regular institutional dialogue. I do wanna point out that the idea for a treaty was not new in 2022, 2023, it appears throughout discussions. But what is new is its placement in direct opposition to the first idea galaxy above, the one that was adopted by consensus in 21. These dueling clusters reflect the presence of catalytic factors, especially the war in Ukraine, and they provide indications of potential institutional change and increasing turbulence, possibly marking the end of a long cycle of internet governance trajectory that included roles, even though appropriate roles, quote unquote in certain terminology for non-state actor stakeholders. So let me conclude and talk a little bit about future research. The outcome that I just alluded to of the 2022, 2023 discussions in terms of ultimately getting consensus on the annual progress report of the Open-Ended Working Group that was just submitted to the General Assembly, I guess in September, went down to the very last moments of the very last day of that final fifth substantive session. And the only way that the consensus was achieved was by the Open-Ended Working Group Chair, Ambassador Garfour, who took a suspension and went around to do informal negotiations and he solved the dissensus by what he termed in his words, quote, technical adjustments to assure the consensus. And the delegation head termed his technical adjustment as footnote diplomacy. Very quickly, the chair crafted two separate independent footnotes, and I call these balancing the dueling idea on galaxies. Each of the footnotes gave a small amount of recognition to each of those idea clusters and set the stage, of course, for further discussion in 2023 to 2024 open-ended working group ahead. And there are many dates set ahead and discussions related to this topic. So in sum, there are indications of potential institutional change. My project is going to continue to identify any emergent or disappearing idea galaxies in the year ahead. These relate to those conflicting isomorphic poles that I began with as indicators of institutional change. And I hope to be able to use, now that we are primarily post-pandemic, a more mixed methods approach to capture the more individual level idea entrepreneurship in these turbulent times, times that continue to catalyze change processes. And with that, I’m gonna turn the floor back to our chair. Thank you.

Moderator:
Thank you, Nanette. I really like this paper. I’m gonna give feedback now and then we go to the next paper. So I really like this paper because it addresses the big questions in global internet governance. And it looks at recent developments in an important institution, namely the open-ended working group. I have two broader feedback points, one on theory and the second on empirics. So on theory, a number of things, I think, could be further clarified. First, you use a lot of concepts, especially when you set out the different indicators of institutionalization and deinstitutionalization and the stages of institutionalization. So, do you really need all these concepts, such as there was habitualization, objectification, sedimentation? Many of these concepts do not come back in the analysis, and I would only focus on those that you actually need for your analysis, and define more clearly what you mean by them. Second, you use institutionalization and deinstitutionalization processes as a binary, but what about the literature on contested multilateralism or counterinstitutionalization? There authors emphasize competitive regime creation, regime shifting, so there’s more than just making and breaking of an institution. For instance, parallel regime creation, right? Like the Open Ended Working Group was an alternative to the UNGGE. Institutions can gain in relevance, or lose relevance, or even become zombies. So is this binary really maybe too limited? With regard to empirics, the findings address three main categories, emerging technologies, crises, and idea galaxies, but where do these categories come from? Why did you pick these and not others? And how are they theoretically related to institutionalization? I think the section about idea galaxies is the most elaborate one, so it’s clear here which topics you focus on, however, I think you could elaborate more on why you focus on certain ideas and not on others. For instance, you focus on issues such as gender, human rights, sustainable development, but why not on other issues such as democracy and equality? Also I think this empirical section could be a paper of its own, so you could consider focusing the paper on idea galaxies only, and thoroughly setting out your theory and operationalization, and then things like emerging technologies and crises could function maybe more as scope conditions to competing idea galaxies. Thank you. I give the floor to Jamie.

Jamie Stewart:
Hello, everyone. I hope you can hear me well. Yes. Wonderful. Thank you very much. Let me just start my presentation. Thank you all for having me here. And I do deeply apologize for being remote. I was hoping to be there in person but was unable to make it. I’m Jamie. I’m from UNU in Macau. That is the United Nations University. And I’m a senior researcher and team lead there. I’m going to be talking about something that’s quite closely related to the presentation of Nanette. But it is a little bit of a different focus. So it’s infrastructure that would be in the stratosphere. And then finally, with SDG 10, reducing inequality. So it would help to reduce the urban-rural divide and also the gender inequality in use of internet services. Now, in terms of the ITU process, we have the World Radio Communication Conference, which is coming up in November and December of this year in Dubai. There, we’re going to be discussing ways of allowing HIBs to use additional frequency bands. It’s not an opposition to technocentric views of cybersecurity, which have a focus on protection of technical systems and networks. But rather, it’s about extending that focus to go beyond technical systems and think about cybersecurity as ensuring expression and exercise of human rights, particularly around access to information and freedom of the press. And this sort of experiment is a… …described in the slide. This slide shows our company vision. And regardless of which level we look at that at, the national level, the organisational level, even the individual level, these things, that protection should be treated as a mechanism for which we should treat human security and protect human rights. So in this work, this is a piece of research that was done in partnership with the UN Women Regional Data Centre on the Asia Pacific. We centralised the concept of safety and wellbeing and looked at how cyber security practices, particularly within civil society and those who are working in the space of human rights defence, can threaten or disempower users of technology. And so this is also working beyond human factors within cyber security, which indicate to us that people and their behaviours, their thoughts and feelings are important for cyber security practices. That is a component here, but it’s not the central element. The central element is the protection of people and human rights as the function of cyber security. And this is really nicely supported by the Association for Progressive Communications, which have come out with a definition of cyber security that centralise human rights and suggest that cyber security and human rights in and of themselves are complementary and mutually reinforcing and interdependent, and therefore we have to pursue them both together to promote freedom and security. So we can take now the foundation of human-centric cyber security, and then what we did is add a gendered lens on top of that, because what we’re interested in is cyber security as a function of the WPS agenda and how we can support women and girls within the context of peace and security. So as I’ve mentioned already, the cyber security research tends to focus on the technical. And we are interested in taking human factors into cybersecurity, but that is both understanding psychological behavioral factors as they shape cybersecurity, as well as a focus on human rights, harms and safety. Alongside those two critical elements. We also recognize that gender fundamentally shapes cybersecurity. Oops, excuse me. And that is because for a few major reasons. The first is that there are gender differences and access and uses to technologies as well as interactions in online spaces. All of these things influence cybersecurity posture and cyber resilience. We also know from a lot of work that’s been that’s been happening within the genders, gender and violence space online, is that online gender dynamics tend to perpetuate power relationships that prevalent offline. So those masculine masculinized norms and how they influence social relationships replicated in online spaces. And we also recognize that women experience distinct types of online violence, and that these types of online violence are more persuasive for women than they are for me. This is all alongside the gender digital divide which I’ll talk about a little bit more. So what does cybersecurity look like in Southeast Asia. Well, the rapid expansion of digital technologies and internet connectivity within the region as well as the variance in terms of internet connection across different countries and development across different challenges. So what we see is that there are some countries within the region which are highly prepared and doing a lot of a lot of really critical and novel work in terms of governance in this area and others that are not the OHCHR just this year released a report on on cybersecurity within Southeast Asia, and what they found what they said. was that the regulatory instruments that are being developed within the region, where there is a high level of investment in surveillance, and in particular are increasing what they considered to be arbitrary and disproportional restrictions on freedom of expression and privacy. And there were six key issues that they suggested were relevant for the region. And I’m not gonna go through these in a lot of details because there’s quite a bit for me to cover in the presentation. But what I will suggest is that these critical elements spreading of hate speech, coordinated attacks, technology surveillance, and restrictive frameworks, criminalization, and internet shutdowns. I would suggest those who are interested to read this report because it’s very enlightening. And as I said, this is quite aligned to the conversation, the discussion of Nanette, where we talked about broader conversations that might be in opposition to human rights. One of the things that’s come up recently is that the general assembly have expressed concerns over these broader consensus that is thinking that cyber crime and this legislation might be misused against human rights defenders and endanger human rights more generally. So we see this a lot in the recognition of what’s happening with journalists around the world and their freedom of speech. Sorry, I’m running through things relatively quickly because I know I don’t have a lot of time. We focused on women, civil society, and human rights defenders in the region. And this group are very disproportionately affected by cyber attacks. And that is because they’re working with marginalized groups in sensitive politicized topics. They are often not well protected by laws and regulations where they exist. They have little say in those laws and regulations. And sometimes, and we know this from direct case study work, that are actually used, those laws and regulations used to directly. harm them and they face a gender digital divide meaning that they’re less represented within the cyber security field and technical roles and therefore they’re less likely to take that into their protect zone. So we wanted to look at cyber security risks and resilience with the goal of promoting human and digital rights of women and girls in Southeast Asia and what we did was quite a complex project that involved a review of the national and regional context. We did an online survey with those who are employed in civil society organizations advocating for women. We interviewed a whole range of women human rights defenders but specifically those who are working in the space of digital rights as well and then we conducted a cyber audit. I’m not going to be talking about all of this and the report will be launched probably early next year so those of you who are interested can contact me about that. I just wanted to really briefly go over something that I think is of quite a lot of importance. This is not comparing it to other regions around the world. What this is is trends in legislation that are happening within the Southeast Asian region and the types of legislation that the amount of legislation within cyberspace that is happening. So you can see there in terms of the top figure the year there was a large increase 15. This has collapsed across countries of new legislative and regulatory frameworks that came about and there were five in 2022 but this the count here was based on the research. So there was a lot of new legislation that’s happening in this area and some of it looks positive but it may not be necessarily used in the same way for all people. So what we know about these laws in Southeast Asia is that the increasing number of laws and the type of laws that are happening allow for surveillance, search and seizure and there are a whole variety of as a set of case studies around this where there’s targeted monitoring including CCTV cameras, collection of biometric data, the the surveillance of protestors and taking the photos of protestors and using AI. Yep, great. I’ll rush through the end. And the using of those types of technologies in order to target human rights defenders. So we know that all of these, I won’t go through them in detail, have a lot of impact specifically on human rights defenders. Again, I won’t go through this. Basically, what I wanted to say more generally from this data is that there are, as I said, there’s variance in the way that gender equality and internet freedom is enacted across Southeast Asia. And even in places where there are high levels of cybersecurity frameworks, they don’t necessarily function in the same way for women’s ESOs and human rights defenders. So needless to say, in our research, we found that technology was actually at the heart of the work that civil society are engaging in, or like our life or the life of our work, which was suggested by women, and that social media was a critical asset for their functioning, which was also a place where they were directly targeted. They faced a huge variety of cyber threats. And we did do some comparison to say that there was high levels of online harassment, misinformation, cyber bombing, and a huge number of our sample had false information spread about them. We also found that there was less cyber resilience amongst these organizations and human rights defenders than what we then what we would hope that there’s some that half felt prepared could respond and recover. But that also means that half did not. And we really need to what we found in here is that we need to decentralize the constant new things, but actually allow people to use the features of digital digital technology in safe and secure ways. The content and the cyber attacks that were faced by women CSOs and women human rights defenders were highly gendered. And I’ve got some quick case studies here, the photos taken without consent and fabricated the deep fakes and used to dehumanize and discredit human rights defenders, that there was an idea of that the human rights defenders should expect to experience violence online and harassment. There were deep threats and discrediting of feminist movements and silencing and removal of safe spaces for discourse. I really wanted to just focus on the last recommendation before I must finish, which is that aside from some of the organizational level recommendations that we’re putting forward, we also need to ensure and what we’re recommending from this work is that there’s gender responsive means that human centric for recourse against cyber attacks and threats. And this is made particularly difficult within a context where there is the perpetrators of those cyber attacks may likely to be state actors or coordinated attacks that are sponsored and highly or well-funded. So we need to make sure that our frameworks are aligned with this and where we’re endorsing these global frameworks, they take this into account. Thank you very much for everybody for your time.

Moderator:
Thank you, Jamie. I think that’s super important and interesting research and it’s a piece that I could relate to a lot personally, so I really appreciate it. I basically have three feedback points, one on the scope of your concepts, one on actors and one on future steps. So with regard to scope, what do you actually include in your definition of threat in your research? So in your introduction, you speak of cyber attacks. Later, you also talk about digital literacy, misinformation and these are all different kinds. of threats, right? The mechanism about defending yourself against a cyber attack such as doxing or stalking is, I would say, way more direct and immediate than reducing misinformation. So should you not make a categorization of the type of threats and how this impacts marginalized communities? How does the causal mechanism of threat differ here and, by extension, how does this call for different types of regulation? At the same time, how far do you think regulation can actually reach? At some point you made a very interesting point that cybercrime legislation is being misused to target human rights defenders. So I think this is a very relevant and interesting point and I think you can make a similar argument about harassers sometimes weaponizing anti-harassment tools built into digital platforms basically to harass other people or that they pick certain platforms with the most limited options for moderation. So how effective do you think regulation actually is and at the same time what’s the alternative? Then on my second point about actors, it remained unclear to me who the actors in this piece really are. For instance, it would help if you could give some examples of women human rights defenders, women civil society organizations. Also maybe some anecdotes at the start could really help the reader understand what type of cases you’re talking about. A similar thing applies to threats. What actors are we talking about here? Because there’s lone wolves and trolls but there’s also coordinated attacks by groups, maybe political groups. So how does this affect policy recommendations? And then finally on future steps. So currently your recommendations are quite broad and I think you could make it a bit more concrete. For instance, you said that social media is a critical tool for operations but also increases risk exposure. So what instances of risk exposure on social… media did you see, and how would you recommend tackling this issue? Thank you. I give the floor to you guys.

Kazim Rizvi:
I hope I’m audible. Thank you to the chair, thank you to GIGANET and IGF for hosting us today in Kyoto on a very lovely morning and thank you to Kamesh for making it just in time. Unfortunately, he had to miss his flight but he got a new flight today and he made it in time so that’s great to see. So first of all, just very quickly introducing myself, my name is Kazim Rizvi, I’m the founding editor of The Dialogue, we are a tech policy think tank based out of New Delhi, India and we work across multiple issues, one of them is AI and we are really excited to present this paper which is authored by Kamesh along with his colleagues who are in India, most likely they are enjoying their Sunday morning unlike Kamesh and me but I think we are having a better time presenting this paper. So very quickly, we don’t want to waste too much time, so very quickly what is the objective and what are we trying to do here and as you see on the titles, this is basically looking at enabling responsible AI in India and we have come up with some principles and these are principles which we believe need to be implemented at different stages and the uniqueness about this paper is that the principles cut across the development stage of AI, the deployment as well as the usage by various actors and consumers and I think that’s where the uniqueness lies in the paper and that’s what we are trying to do because this has not been discussed in India at least till today and that’s the idea for us to sort of work on this paper. So if we move to the next slide, just sort of going through the outline very quickly and I think in the last year or so, we’ve been accustomed to hearing the word AI a lot more, right, with the rise of generative AI applications, most of you in this room and listening to us online are having a direct interaction. And we see AI proliferating across a lot of different ecosystems, such as technology, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI. And what we see as researchers is that the technology is moving away from just a B to B to a B to C technology, where consumers are directly interfacing with AI models to, you know, help them with their daily tasks, professional duties, et cetera. So, you know, we see a lot of algorithms. In many ways, you know, the term we coined is algorithms are the atom of the Internet, right? You cannot live without them. And they make or they sort of create the structure of the Internet, which is the modern Internet as of today, and the services which are provided. So, for example, you know, I have a cat in my house, and I go on social media, and then I’m seeing multiple options to, you know, buy different type of food and stuff for the cat. And I’m not saying that you can’t do that, right? You know, you can buy a specific kind of food, but if you go to social media and sort of post some pictures, then you’ll be, you know, getting different, different type of suggestions and interventions. So it’s really taking over in terms of giving you ideas, giving you inputs from the music you listen to the kind of, you know, the places you want to visit. It’s really everywhere in our lives today. So while it’s doing a lot of good things, there are certain challenges, right? So, you know, we have a lot of challenges, and I think, you know, as we move to the next slide, what we’ve tried to do here in this paper is understand those challenges, and identify what are the implementing frameworks, governments, scholars, development organizations, multilateral organizations, tech companies have to work towards as well as civil society, right? So what we’ve done is we’ve sort of mapped out a certain specific ways of identifying responsible AI. Maybe we can move to the next slide. Yeah. So in this paper, we’ve mapped impacts and harm, and what we’ve done is we’ve looked at AI at the development stage, so the design and development at the algorithmic development model stage, where we’ve analyzed what are the harms which could take place when you’re designing the technology, when you’re really coding it, when you’re sort of coming up with algorithms, when you’re collecting data, what kind of data you’re collecting, how should you collect the data, what is your authenticity, etc. So that is one stage, which we’ve understood. And the second stage is the harm stage, which is the post-development development deployment stage, when the technology is deployed by industries. It could be, you know, horizontal industries such as finance, education, even environment sustainability, social media, whatever industry is using the technology, there are certain harms present over there. So how do we protect ourselves from those harms? So these are the two stages which we’ve come up with, and again, this is a very unique approach because most of the principles which you see, be it the OECD principles or the UNICEF principles or different multilateral principles or bilateral principles, they’re mostly focusing on the deployment sort of stage, and this is something which we have sort of figured out that design, deployment, and development, all three stages have to be met, and that’s the focus for the paper. If you go to the next slide, it’s pretty much sort of summing up, you know, what we are doing. So three stakeholders, which is the developer, and then you have the deployer, and then the end user, the end population. What are the principles for the end population as well? So, let’s say if you develop a health tech application, there are principles for the technologist, the coders who have designed the application. There are principles for the hospitals, clinics, doctors who are using the technology, and then there are principles when it comes to

Moderator:
how consumers are interfacing with the technology, and how do you protect them as well? So, these are the three stages and the stakeholders. So, then we’ve really mapped these harms across the AI lifecycle, and over here, Kamesh, if you want to quickly come in and talk a little bit about how we’ve done these mapping of different principles and what those principles are. I hope I’m audible. Yeah, I guess I am.

Kamesh Shekar:
So, thank you, Kazim, for setting the context for the paper itself. So, just coming from where you left itself, what the paper is trying to do, and why this is a unique way of looking at things, is basically most of the… Yeah. Basically, most of the times when we are, most of the frameworks which are available outside there is overly concentrated on the risk management which comes at the AI developer level. But what we have went about doing is that, is looked into a 360 degree approach where we wanted to move beyond the developer and ask a question about if at all a developer is designing a technology ethically, does that mean that when a technology is deployed or used, there will not be any fall through the cracks happening? So, just to answer this question is where we have come up with the model of principle-based ecosystem approach. And the model is basically talks about, as Kasim mentioned… is mapping all the principles for various stakeholders who come within the ecosystem itself, such that collectively, we could actually ensure that some of the adverse impacts that we have mapped doesn’t happen. So firstly, what we have done is came up. Firstly, what we did is we took five adverse impacts, and we chose exclusion, false prediction, and copyright infringement, privacy concerns, and information disorder itself. Why these five is basically because these are the most top five aspects which are talked about when it comes to AI implications itself. But this is not an exhaustive list. This is just a start of what we are doing. Then what we went about doing is, as Kazim already mentioned, we tried to look at impact and harm. For us, impact and harm is merely is this that impact is just a construct of a harm which could happen later, and how much you are aware of that. And harm is obviously exposed itself, and the actual harm happens. So our ideology behind the slide itself is this, how the first aspect of our paper itself is that. Whenever we talk about exclusion or any of these adverse impacts, we don’t really look at it from the granular level, where there are different stakeholders involved at different stages of the AI lifecycle itself, contributing at the different levels, which accumulates into something like an exclusion happening. So we went about going and mapping all of those impacts and harms, which occurs at the different stages of the lifecycle of the AI itself. One important aspect here, if you could look into the slide, is we have two important additional stages that we have added, which is the gray, which is your actual operationalization, which is at the deployment level. And then we went about doing. it at the direct usage, which is what Kazim mentioned about B2C implications coming into the picture. So next slide. Yeah. I think we skipped one. Now it’s working. Now it’s working. Now it’s working. Yeah. Now it is working. So yeah. So yeah. This is the one. So now that we know impact and harms are mapped, then what the paper goes about doing is mapping various principles that could be followed by different stakeholders at the different stages of the lifecycle. And here, if you could see, these are some of the principles which have been extracted from globally available frameworks, like your OECD, UN, and EU, and et cetera, and stuff. And also India’s G20 declaration, which also speaks about some of these principles. In addition to that, from our research also, we have suggested some new principles. So after principles, what we go about doing is that is the operationalization. Here the unique aspect that the paper tries to do is that when we talk about human in the loop as a principle, most of the times, we just use the term as passed by. But when it comes to operationalization, that particular principle means differently at different stages of the lifecycle. And that exact difference is what we wanted to bring out from this paper. For example, if you could look at this, at the planning to the build and use stage, human in the loop really means that you want to engage with your stakeholders, and et cetera, and stuff. Whereas for the actual operationalization stage, it could mean that you have to give a human anatomy to people, a subjected human anatomy to people, where they could also take some decisions against whatever the AI decision has been given. So we have brought out such differences into picture within the operationalization. Um, now the, now that impact is done, operationalization, sorry, principles are mapped, operationalization is done. Finally, the paper to just give a holistic approach, we also talk about the implementation which comes from your government here because like our research is extensively in Indian context. So we went about looking at like, you know, what is, um, what can be done by the Indian context in terms of like implementing such a framework where we look at like a domestic coordination, which is important within the legislations. And then international cooperation is important because various like, um, aspects are happening at like different, uh, um, you know, um, institutional level and like jurisdictional level and like bilateral level, et cetera, and stuff. Also like India moving towards a chair of being GPA, I guess like this is like this paper adds a great value in terms of starting that conversation, just one minute. And finally, we also talk about like establishing a public and private collaboration in terms of like how we can implement it. And like, this is something that like as an organization, we keep pushing in terms of like it not necessarily has to be something at the compliance level. It can also come in the level of like, you know, making it like a, you know, value proposition for the businesses to take it.

Moderator:
So I’m going to cut you here, I’m sorry. Thank you so much for your presentation. I think you bring up a very relevant question, um, that addresses a lot of blind spots in current academia. Um, and instead of looking at only developers, you also look at deployers and their role in responsible use of AI. And basically you have three main points of feedback, uh, one on the focus of your argument, one on the narrowing down of concepts, and one on your causal mechanisms. So with the focus of your argument, like I said, instead of looking at developers, you look at deployers. But I wondered why not also focus on, uh, users, like end users? So you think users, like, I don’t think you think that users have no role to play. in responsible and ethical use of AI, right? And especially, since you talk a lot about generative AI, this is often steered by end users. So what is your perception on their role to, like, on their role that they play in the responsible use of AI? Second, on narrowing down concepts, I think often you can make your argument more concrete. So for instance, on page 16, you argue that the AI solutions might be producing an error or may be designed to capture some biased parameters to produce a suggested outcome. However, real-life harms of such outcomes only translate into action when AI deployers blindly use the same for making real-life decisions. So in this case, I was like, OK, but then what do you mean by real-life harms? What do you mean by real-life decisions? What do you mean with AI solutions, you know? So this sometimes gets so broad that it could mean anything. And I think sometimes specifying what you mean would actually help making your argument. So arguments often remain quite abstract. And I think you can make it more concrete by basically defining what do you mean by AI? What do you mean by AI solutions? And just mention a couple of examples. And then finally, on conceptualization and causal mechanisms, we saw the figure, as well, on the AI lifecycle. I had a number of questions about this model. So on a more general level, it kind of remained unclear to me where this model is kind of derived from. Where does this come from? How did you arrive at this model? So I think you need a bit more like, OK, what is already out there? And what do we use to come to this model? Second, you argue that you want to focus on deployers, but the largest part of the model is still developers. So it’s not completely in line with the argument that you’re making in the paper. You say, OK, everybody focus on developers. We focus on deployers. But then in the model. in the AI lifecycle, it’s mostly developers. So what really is the role of deployers in this model? And then finally, I thought it was interesting, there’s this, like the top two categories were like exclusion and false prediction. But there was no impact on end users. And I wondered why. Because I thought like, there’s a lot of impact on end users if we think about exclusion and false predictions, right? So these are my points. I would like to open up the floor if people have questions or comments or… Yes. And then after we collect, we go back to the panel. Anybody else?

Audience:
Yes, so I’m gonna ask you a really tough question, but it’s more an attempt to make a general point about how messed up our dialogue about AI is rather than focusing on you, because I think you’re mapping out there of this ecosystem was actually a pretty interesting contribution and worthwhile. But you open your paper by saying, invoking the invention of the printing press, right? Now, can you use your imagination and try to project for me what would have happened if the authorities and the public in 1452 had decided they were going to regulate printing? And what do you think would have resulted from that?

Moderator:
Do we have any other questions in the room? Because otherwise we can go back to the panel. We can go in reverse order. We have like nine minutes left, so that would be like three minutes max. Sure, so to your question.

Kazim Rizvi:
So I think that’s a good point. So I think that’s a good point. So I think that’s a good point. So in this paper, we haven’t suggested that AI should be regulated, right? What we are saying is that, look, there are certain harms associated with the use of AI, which we have to be careful of. And we have to work towards developing some frameworks and principles around these harms, which we’ve identified. And I think that’s a good point. And I think that’s a good point. So, you know, across the globe, AI is regulated as it is, right? We’ve not taken a stand that, look, you have to sort of come up with very strong regulations to, you know, sort of really bucketed into different kind of, you know, technologies which should or should not be used. But maybe in the next 10, 15 years as the usage grows, we may see that, you know, we may see that, you know, we may see that AI should be regulated as it is. So I think it’s very clear that these are principles which will help in improving the effectiveness of the technology. I mean, the same argument goes for fire. The same argument we can apply for fire as well. So, you know, we may not have seen what we see today. But eventually it was. So the same argument applies to this, that an AI has been around for a few decades now. It’s not like the technology is very new of late. It’s been there for a while. But we are not suggesting that, look, put very strict or very hard regulations to begin with. What we’re suggesting is, look, move slowly, but watch out for harms as they take place. So, you know, I think it’s very clear that, you know, we may see that, you know, we may see that, you know, we may see that AI is not a solution. I mean, look at the industry, look at civil society. are a means to also put that discussion into context that look, we need to move towards more responsible deployment of AI. And what that means, even we don’t know. I mean, we are all studying this. A lot of scholars globally are trying to figure out what is really responsible AI, as much as responsible printing press or responsible use of fire would be.

Moderator:
Just quickly coming in on your… Yeah, very quickly coming in your points.

Kamesh Shekar:
There’s too many things to discuss in whatever you have said. We can take it offline too. On the very first thing on impact population and end users, so basically the paper does that, where it also says that as we as end users and impact population use such technologies, how should we responsibly use it? So there are certain principles and there are certain operationalization things that we talk there. So second thing that on your question on where is this, the life cycle comes from is derived from NIST and OECD and et cetera and stuff. In addition to that, we have added some aspects of our own. And we have also validated how we think that is important within the paper. Third thing about, if you could repeat your third point, or it was something on the exclusion and stuff.

Moderator:
Well, I don’t think we have time. I think we should take it to the break. We can go back to Jamie for like last points and then we can wrap it up, I think. Jamie, are you still there? Yes. Yes, I’m still here. Thank you. And thank you very much, Danielle, for your comments.

Jamie Stewart:
Just to, I will be very brief in these because I obviously don’t have a lot of chance to go over them in detail. And you brought up some really, really good points. Just to say we had a very- comprehensive list of threats that we asked about experiences at both the personal and the organizational level and we also had some open-ended information so people could add more. There is a lot going to be a lot more information about that in the report. You also asked about anecdotes in terms of actors. This is a very sensitive issue and I think you know in terms of diplomacy and what that looks like. We did ask about perpetrators and who they think the perpetrators are and obviously we have to be considered that that is perceptual as I mentioned right at the very beginning and there are a range of state and non-state actors and as I said most of them in terms of the stories were very coordinated and so that is when we’re experiencing but on social media and what those attacks look like obviously they’re sometimes very difficult to trace but we can definitely trace some of the surveillance software as it was used and that’s very relevant to the South Asian context. I did want to very briefly end on first off say that yes your recommendation your point about the recommendations being more concrete is very well taken and we’re working with the civil society right now in order to co-create those more but the last thing and what I wanted to end on was your very important point and I agree with this entirely the misuse of regulation and law and policy against human rights defenders or against journalists or advocates and those who speak out. I think this is an incredibly important point that is very nuanced and the ones that I really want to highlight and pay attention to are those ones that are considered to be anti-terrorism and cyber that the more generic cyber crime laws that really put those who are potentially speaking out in a place where they could be legislated against. And I think that’s something that we need to, we need to very strongly consider when we’re taking kind of more global and international regulatory frameworks, because they can do the opposite. And just having cybersecurity policy in place does not mean that it’s in place in a way that’s protective against human rights. So thank you for bringing that up and that’s well and thank you so much, everybody.

Moderator:
Thanks, Jamie. Thanks everybody who was on this panel and for participating. Also thanks to Nanette, even though she’s no longer here. Yes, I think we can go to the break. Everybody please give a hand to the panelists. And what time are we back? Like what time do we reconvene? We should come back at 12.35. 12.35, 1.35. 1.35. I hope. If we come back at 12.40, I think we can. 1.40. Yeah, yeah. It’s okay, the jet lag. It’s fine. Okay, we come back at 1.40 everybody. And that goes for people online as well. Okay.

Audience

Speech speed

179 words per minute

Speech length

256 words

Speech time

86 secs

Berna Akcali Gur

Speech speed

147 words per minute

Speech length

2264 words

Speech time

921 secs

Jamie Stewart

Speech speed

162 words per minute

Speech length

2664 words

Speech time

984 secs

Kamesh Shekar

Speech speed

180 words per minute

Speech length

1281 words

Speech time

426 secs

Kazim Rizvi

Speech speed

210 words per minute

Speech length

1607 words

Speech time

458 secs

Kimberley Anastasio

Speech speed

171 words per minute

Speech length

2039 words

Speech time

717 secs

Moderator

Speech speed

140 words per minute

Speech length

4886 words

Speech time

2101 secs

Nanette Levinson

Speech speed

141 words per minute

Speech length

1476 words

Speech time

629 secs

Vagisha Srivastava

Speech speed

171 words per minute

Speech length

2617 words

Speech time

916 secs

Yik Chan Chin

Speech speed

157 words per minute

Speech length

2398 words

Speech time

919 secs

Connecting open code with policymakers to development | IGF 2023 WS #500

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Helani Galpaya

Accessing timely and up-to-date data for development objectives presents a significant challenge in developing countries. It can take up to three years to obtain data after a census, leading to outdated and insufficient data. This lag in data availability hampers accurate planning and decision-making as population and migration patterns change over time. Additionally, government-produced datasets are often inaccessible to external actors like civil society and the private sector. This lack of data transparency and inclusivity limits comprehensive and integrated data analysis.

Another issue is the lack of standardisation in metadata across sectors, such as telecom and healthcare, especially in developing countries. This lack of standardisation creates challenges in data handling and cleaning. The absence of interoperability standards in healthcare sectors further complicates data utilisation and analysis.

Cross-border data sharing also faces challenges due to the absence of standards. This absence hinders the secure and efficient exchange of data, hindering international collaboration and partnerships. Developing more standards for cross-border data sharing is crucial for overcoming these challenges.

Working with unstructured data also poses challenges, particularly when it comes to fact-checking. There is a scarcity of credible sources, especially in non-English languages, making it difficult to identify misinformation and disinformation. Access to credible data from government sources and other reliable sources is essential, but often limited.

Efficient policy measures and rules are necessary to govern data usage while preserving privacy. GDPR mandates user consent for sharing personal data, highlighting the importance of differentiating between sharing weather data and personal data based on different levels of privacy violation.

The usage of unstructured data by insurance companies to influence coverage can have negative implications, potentially resulting in unfair risk classification and impacting coverage options. Ensuring fairness and equality in data usage within the insurance industry is crucial.

To address these challenges, building in-house capabilities and utilising open-source communities for government systems is recommended. Sri Lanka’s success in utilising its vibrant open-source community and building in-house capabilities for government architecture exemplifies the benefits of this approach.

The process of data sharing is hindered by the incentives to hoard data, as it is seen as a source of power. The high transaction costs associated with data sharing, due to capacity differences, also pose challenges. However, successful data partnerships that involve a middle broker have proven effective, emphasising the need for sustainable systems and case-by-case incentives for data sharing.

The evolving definition of privacy is an important consideration, as the ability to gather information on individuals has surpassed the need to solely protect their personal data. This calls for a broader understanding of digital rights and privacy protection.

In conclusion, accessing timely and up-to-date data for development objectives is a significant challenge in developing countries. Government-produced datasets are often inaccessible, and there is a lack of standardisation in metadata across sectors. The absence of standards also hampers cross-border data sharing. Working with unstructured data and fact-checking face challenges due to the scarcity of credible sources. Policy measures are necessary to govern data usage while protecting privacy. Building in-house capabilities and utilising open-source communities are recommended for government systems. The government procurement system may need revisions to promote participation from local companies and open-source solutions. Data sharing requires sustainable systems and incentives. The definition of privacy has evolved to encompass broader digital rights and privacy protection.

Audience

During the discussion, the speakers explored various aspects of open source, highlighting its benefits and concerns. One argument suggested incentivising entities to share data as a way to counteract data hoarding for competitive advantage. It was noted that certain organisations hoard data as a strategy to gain a competitive edge, but this practice hampers the accessibility and availability of data for others. Creating incentives for entities to share data, therefore, was emphasised as a vital step in promoting data openness and collaboration.

Conversely, the potential negative effects of open source were also discussed. The speakers raised concerns regarding the need to verify open source code and adhere to procurement laws. They specifically mentioned the French procurement law, expressing apprehensions about the ability to effectively verify open source code and ensure compliance with regulations. These concerns highlight the necessity for thorough scrutiny and robust governance measures when relying on open source solutions.

Building trust in open source was another significant argument put forth. In Nepal, for instance, there was a lack of trust in open source, hindering its widespread adoption across different sectors. The speakers stressed the importance of establishing mechanisms that enable the verification of open source code, ensuring its reliability and security to build trust among stakeholders. They also emphasised the need for capacity building to enhance knowledge and expertise required for verifying and utilising open source code effectively.

Overall, the sentiment surrounding the discussion varied. There was a negative sentiment towards data hoarding as a strategy for competitive advantage due to its restriction of data availability and accessibility. The potential adverse effects of open source, such as the need to verify code and comply with regulations, were also viewed negatively because of the associated challenges. However, there was a neutral sentiment towards building trust in open source and recognising the necessity for capacity building to fully leverage its benefits.

Mike Linksvayer

Mike Linksvayer, the Vice President of developer policy at GitHub, is a strong advocate for the connection between open source technology and policy work. He firmly believes that open source plays a crucial role in making the world a better place by supporting the measurement and information of policy makers about developments in the open source community. Linksvayer expresses enthusiasm about the potential of sharing aggregate data to address privacy concerns. He sees promise in technologies like confidential computing and differential privacy for data privacy and recognises the importance of balancing privacy considerations while still making open source AI models beneficial to society.

Mike Linksvayer emphasises the crucial role of archiving in software preservation and appreciates the contributions of Software Heritage in this field. He highlights the separation of preservation and making data openly available. Linksvayer sees coding as unstructured data and acknowledges the importance of data collection in research on programming trends and cybersecurity. Collaboration in software development is facilitated by platforms like Github, which provide APIs and open all events feed, enabling the sharing of aggregate data. Linksvayer believes that digital public goods, including software, data, and AI models, can be effective tools for development and sovereignty, addressing various Sustainable Development Goals (SDGs).

Promoting and supporting open source initiatives is essential, according to Linksvayer, as they drive job creation and economic growth. He cites a study commissioned by the European Commission estimating that open source contributes between €65 to €95 billion to the EU economy annually. Linksvayer also stresses the importance of cybersecurity in protecting open source code and advocates for coordinated action and investment from stakeholders, including governments.

In summary, Mike Linksvayer’s advocacy for open source technology and its connection to policy work underscores the potential for positive global change. He emphasizes the importance of sharing aggregate data, advancements in data privacy technologies, and the promotion of digital public goods. Linksvayer also highlights the economic benefits of open source and the critical need for investment in cybersecurity.

Cynthia Lo

During the discussion, several key points were highlighted by the speakers. Firstly, Software Heritage was praised for its commendable efforts in software preservation. It was mentioned that the organization is doing an excellent job in this area, but there is consensus that greater investment is needed to further enhance software preservation. This recognition emphasizes the importance of preserving software as an essential component of data preservation.

Another significant point made during the discussion was the support for assembling data into specific aggregated forms based on economies. This approach was positively received, as it provides a large set of data that can be analyzed and utilized more effectively. The availability of aggregated data based on economies allows for better understanding and decision-making in various sectors, such as the public and social sectors. This aligns with SDG 9: Industry, Innovation and Infrastructure, which promotes the development of reliable and sustainable data management practices.

One noteworthy aspect discussed by Cynthia Lo was the need to safeguard user data while ensuring privacy and security. Lo mentioned the Open Terms Archive as a digital public good that records each version of a specific term. This highlights the importance of maintaining data integrity and transparency. The neutral sentiment surrounding this argument suggests a balanced consideration of the potential risks associated with user data and the need to protect user privacy.

Furthermore, the discussion touched upon the role of the private sector in providing secure data while ensuring privacy. Cynthia Lo raised the question of how public and private sectors can collaborate to release wide data sets that guarantee both privacy and data security. This consideration reflects the growing importance of data security in the digital age and the need for collaboration between different stakeholders to address this challenge. SDG 9: Industry, Innovation and Infrastructure is again relevant here, as it aims to promote sustainable development through the improvement of data security practices.

In conclusion, the discussion shed light on various aspects related to data preservation, aggregation of data, user data safeguarding, and the role of the private sector in ensuring data security. The acknowledgement of Software Heritage’s efforts emphasizes the importance of investing in software preservation. The support for assembling data into specific aggregated forms based on economies highlights the potential benefits of such an approach. The focus on safeguarding user data and ensuring privacy demonstrates the need to address this crucial issue. Lastly, the call for collaboration between the public and private sectors to release wide data sets while ensuring data security recognizes the shared responsibility for protecting data in the digital age.

Henri Verdier

In this comprehensive discussion on data, software, and government practices, several significant points are raised. One argument put forth is that valuable data can be found in the private sector, and there is a growing consensus in Europe about the need to promote knowledge and support research. The adoption of the Data Sharing and Access (DSA) policy serves as evidence of this, as it provides a specific mechanism for public research to access private data.

Furthermore, it is argued that certain data should be considered too important to remain private. The example given is understanding the transport industry system, which requires data from various transport modes and is in the interest of everyone. The French government is working on what is called ‘data of general interest’ or ‘Données d’intérêt général’ to address this issue.

The discussion also highlights the importance of data sharing and rejects the idea of waiting for perfect standardization. It is noted that delaying data sharing until perfect standardization and good metadata are achieved would hinder progress. Instead, it is suggested that raw data should be published without waiting for perfection. This approach allows for timely access and utilization of data, with the understanding that standardization and optimization can be addressed subsequently.

The protection of data privacy, consent, and the challenges of anonymizing personal data are emphasized. The European General Data Protection Regulation (GDPR) is mentioned as an example of legal requirements that mandate user consent for personal data handling. It is also noted that anonymization of personal data is not foolproof, and at some point, someone can potentially identify individuals despite anonymization attempts.

Open source software is advocated for government use due to its cost-effectiveness, enhanced security, and contribution to democracy. France has a history of utilizing open source software within the public sector, and there are laws mandating that every software developed or financed by the government must be open source. The benefits of open source software align with the principles of transparency, collaboration, and accessibility.

The discussion also addresses the need for skilled individuals in government roles. It is argued that attracting talented individuals can be achieved through offering a mission and autonomy, rather than relying solely on high salaries. The bureaucratic processes of government organizations are criticized as complex and unappealing to skilled workers, indicating a need for reform to attract and retain talent.

In conclusion, this discussion on data, software, and government practices emphasizes the importance of a collaborative and transparent approach. It highlights the value of data in both the private and public sectors, as well as the need for data sharing, open source software, and data privacy protection. The inclusion of skilled individuals in government roles and the promotion of a substantial mission and autonomy are also seen as essential for effective governance. Ultimately, this comprehensive overview underscores the significance of responsible data and software practices in fostering innovation and safeguarding individual rights.

Session transcript

Cynthia Lo:
us this morning. It’s a little bit early for individuals on site. Today we’re talking on a workshop on connecting open code with policymakers to development. And an agenda that we have here today, we’re going to go through a round of introductions and then overview of connecting open code with policymakers. Then we’ll move directly to our panel discussion and then a Q&A. Feel free to ask questions also to our online participants as well. I’m going to hand it over first to Mike Linksvayer to do an introduction. One second. We have slight technical difficulties. And so one moment while I fix that. I’m going to hand it over

Mike Linksvayer:
to Halani and never mind, we have Mike now. Hey, thanks a lot for resolving those technical difficulties. I’m sorry I can’t be there in person. I’m Mike Linksvayer. I’m the VP of developer policy here at GitHub. Former developer myself who’s now been doing policy work focused on making the world better for developers and helping developers make the world a better place. Kind of open source is a big part of the way that happens. And I’m really excited about measuring it and informing policymakers about what’s going on. And so I’m really excited about this panel. Great. And I’ll pass it over to our speakers here, Halani and Anri.

Helani Galpaya:
Is there a specific question? Just to introduce yourself. Okay. I’m Halani Galpaya. I’m the CEO of LearnAsia. It’s a think tank that works across the Asia-Pacific on broadly infrastructure, regulation and policy challenges, but with a huge focus on digital policy. Thank you.

Henri Verdier:
Hello. Good morning. I’m Henri Verdier, French ambassador for digital affairs. Just to mention that I’m not a career diplomat, I was a French entrepreneur a long time ago and I used to be the state CIO for France.

Cynthia Lo:
Great. Thank you. So we’re going to move directly to our panel talk. And to start, let’s talk a little bit more about challenges from unmet data needs. So let’s start with Helani here. What are some of the challenges that you’ve seen over the years on unmet data needs?

Helani Galpaya:
I mean, from a development perspective, understanding where we are in whatever those development objectives, that’s the starting point of any kind of development. And that’s a problem if there is no data. And particularly when it comes to developing countries, which is where I come from, this is a particular challenge, right? So traditionally we’ve relied on government-produced data sets, take for example the census. Every 10 years it’s supposed to happen. And low levels of digitization has traditionally meant it takes about three years after the census to actually get some data out in many countries, by which time the population has changed, the migration patterns have changed, and so on. But we know now there are obviously lots of other proxy data sets that we can use. But the timeliness is one concept. that we worry about in development because the data is slow to come by, even when it is available. The second unmet need is if you’re outside of government, is the availability of data to actors outside government. And frankly, within government, sometimes the data that’s collected by one department or ministry is not even available to others, right? So there’s a very low level of data access possible within government, and certainly for civil society and private sector outside government to access data. Many governments have signed on to open data, charters and all of those things, but really the data that they put out is sometimes not what most people need. It’s not usually in machine readable format, so you spend enormous amounts of time digitizing it and data-fying it. So these are sort of basic challenges and basically, I mean, from the government point of view, governance and regulation in particular, the oxygen that feeds that engine is data because there’s a huge data asymmetry between the government and the regulators versus the governed entity. Take telecom operators, for example, right? How are they doing? They have a lot more information about their operations than the regulators or the governing party would. So there’s really multiple data challenges that we have, and increasingly, the conversation is that the private sector data can act as a proxy to inform development, but negotiating that and accessing that is particularly hard. So there’s multiple data challenges in developing countries, particularly from our point of view as a research organization sitting outside government and outside private sector.

Henri Verdier:
Thank you for the question. 15 years ago or something like this, government understood. that open government data was very important. And together, we did work a lot to open our data. And then maybe we’d commit later our source code. And we learned some lessons, that those data could create much more value if more people can use it, that it was a matter of transparency, democracy, but also economic development, efficiency, and maybe citizenship. And more and more, we understood that government don’t have the monopoly of general interest. And some very important data are in the private sector. So it’s time, probably, and it’s the moment to start thinking deeply, even philosophically, about the private sector data. In Europe, first, there is a growing consensus that, first, we need to help research and to promote knowledge. There are a lot of topics where we have to know. I can speak about disinformation, some impact of social networks, but also climate change, or some important topics. We need more knowledge. And for example, if you look at the DSA that we did adopt last year, we do organize a specific access to private data for public research. Of course, I know that there are important issues, privacy, intellectual property, sometimes security, because if you share everything, you can allow reverse engineering and hacking, et cetera. But we can fix it. And for example, there is an important field of research regarding confidential computing. You can use the data without taking the data. So this is a growing consensus. And probably, we will have, collectively, this kind of consensus. the international community to make the public research stronger and to organize ourselves to be able to understand important mechanisms. But then there are also other actors that need access to those data. And for France, for example, first we do encourage the private sector to be more responsible. Let’s think, for example, about the transport industry. If you don’t have all the data, you have nothing. If you don’t have buses and taxis and personal cars and motorcycles and metro and train, you don’t understand the system and you cannot take good decisions. And this is in the interest of everyone, the public decision-maker, private actors. Everyone needs a good comprehension of a good knowledge of the system itself. So we do encourage cooperation, sharing the data, et cetera. Then we think that we can go further. Maybe you know the French economy Nobel Prize, Jean Tirole, he did publish a lot about the economy as a common good. And we consider that it’s time to conceive some incentive to make the private sector share some important data. And this year, in the French government, we are starting to work deeply on what we call data of general interest, the Données d’intérêt général. Because as I said, government don’t have the monopoly of general interest. And some data should be considered as too important to be allowed to remain private. Of course, this is complex because we need a legal framework to give a status to this kind of data. But really, we did open the case to create a status for some very important, impactful data and to decide that… And those data has to be open, even if they come from the private sector.

Cynthia Lo:
Perfect. I do also wonder, you mentioned one thing about policy and standards. A lot of metadata has very clear standards in financial markets, in healthcare, for instance. I’m curious to know, are there any unmet needs within the standards of metadata? It’s currently governed quite well, but is there anything, is there a certain standard that isn’t out there that could be?

Helani Galpaya:
I mean, the data we deal with, my data scientists deal with, telecom, mobile telecom network, big data, basically call detail records, for example, that we get from base stations, trillions of call detail records. These are not standardized under any means, right? In fact, the team spent four to six months cleaning up the data, because across, when you get data from four to six telecom operators, they’re actually not standardized, how the numbers are. So they’re not interoperability standards. There are many, many sectors where there isn’t interoperability standards. And of course, some of the coolest stuff that comes out is from unstructured data anyway, like social media data and so on. I think the financial sector has traditionally been well forward in this, but many other sectors haven’t. Health, I think, in developing countries is less developed, interoperability standards. And certainly for cross-border data sharing, this is a fundamental problem, right? Like when you look at taxation data, all of that, there’s a lot more work that needs to be done, I think, particularly when it comes to developing economies.

Henri Verdier:
Yes. I did join the French government 10 years ago to lead the open data policy. The lesson learned is that… If you wait for a perfect standardization and a good metadata, you will never do anything. When I did join the French government, we wanted to index every dataset through an index that was conceived during the Middle Age for the National Archives with 10,000 words. So it was quite impossible to publish a dataset because you had to come back to Philippe Lebel to decide where you did. So I take from the open data movement the idea that you share your raw data as they are and don’t wait. But it doesn’t mean that standards don’t matter. Of course they do. But let’s start by publishing. The second lesson is that maybe the apification process is more important than the indexation on metadata themselves. So first, during maybe five years, you did publish everything. But it was not always very useful, especially for data that has to be refreshed very frequently. And you need the last data and not just a data. So for this, we did then take three years to organize a proper API ecosystem. And again, people told me, then first you have to conceive a good architecture of the API system. I said, no, let’s build API. And then we will optimize the API system. So my lesson is that, and my personal experience, don’t wait for a perfect standardization because you will never accede to this goal. This is a moving target. So don’t wait.

Cynthia Lo:
Thank you. And I think that brings us to our next point quite well. You both highlighted this as well on private sector data for development purposes. And I know Mike also has some thoughts on that. But I’d love to know. You mentioned on private sector data, a lot of times it’s a little unstructured, but that’s interesting because you have, it’s wider. You can take a look and analyze that in an easier way. Tell us a little bit more on that, what has been a surprising find? On private sector data? Yes, and some unstructured data.

Helani Galpaya:
Some unstructured data that we work with include, for let’s say for misinformation and disinformation identification, automatic identification of mis and disinformation that spread across platforms in languages outside of English in particular. And there, I think, well, there’s sort of two types of problems. One is just the low levels of data. So I mean, even assuming you have all the language resources, like a language corpus that is needed to identify this on, you know, natural language processing, you at some point you’re going to need a fact base to check against, right? So there, the unstructured data is, well, structured or unstructured data comes from government resources and maybe other sort of credible sources, right? So you’re dealing with two types of data. To fact check numbers, you’re looking at usually trying to find government, and to fact check other things, you’re looking at reports and so on. And there’s a serious lack of data. So for example, if you look at like the big popular English language models, they are trained on millions of articles. We tried this in Bangladesh and Sri Lanka to fact check. We’re down to 3,000 articles that are credible, you know, sort of data sources that we can use to fact check against unstructured, you know. So we are working with a very, very limited universe of credible data that’s actually out there because there’s very little out there so I think that’s for us the biggest challenge.

Henri Verdier:
Sorry, it’s a very complex question. First I was thinking thatcompletely unstructured data are very rare because usually someone did produce the data and did pay something. So a data set is the answer to one certain question but usually it’s not your question. So they have a structure usually. Of course in the world of Internet of Things and some source and you have more and more quite not structured data but if you do observe we are living in a world of data with purpose so they have a structure. So the question again is to think about interoperability and to build bridges. One other question with unstructured data or with a minimum structure is that if you want to share the data to give them as many as more value as they can have you also have to protect other important securities like again privacy but not just privacy like interoperability and if you don’t really understand what is within the data you are not sure that you are protecting all the securities you have to protect. That’s why I pay more and more attention to the field of research as I said of confidential computing. We have to learn to work with the data to train AI model to ask question. For example let’s in France as you know we have an ancient and very structured social security system. So there is one database the social security with every prescription that every French doctor made during the last 20 years. Can you imagine this? 70 million people, every prescriptions made by a doctor during 20 years. And then we make a statistic archive. So you take 1% and we. Here, of course, you have a lot of knowledge and science. You can discover new drugs because you can discover that, I don’t know, someone that had a lot of head ash at the age of 20 don’t have Alzheimer’s 40 years later. And you can discover a new principle of some drug and a lot of things like this. But you cannot just open this kind of data because this is pure privacy. This is my health and your health. But you can organize a technical strategy to accede to this data without showing them. And if you do this, you can control a bit the people that are using this data. And if they don’t respect some laws or principles, you can disconnect them. So this is probably an important field. So again, I’m not looking for a perfect standardization. But we can organize the ecosystem of how to access to the data, when, why, and give another relation between knowledge and data.

Helani Galpaya:
And I agree with the minister. Some of the solutions are technical. We’ve certainly worked with differential privacy methods when we use call data records to still have the data be as usable to inform policy, but without revealing where an individual might actually be or what that person’s number is and all of that. The other part of the solution, I think, is policy, is to have some kind of governing structure to make sure that we are able to use it and preserving the privacy and having some sort of rules around what that. users, what the data is used for, like in the health care system, that insurance companies cannot use it and then drop, you know, private insurance companies cannot drop coverage, because they have so much more information about a set of users. Even if they’re not individually identifiable, once you’re in an insurance pool, you can identify that this is a much higher risk. So there’s sort of, you know, policy as well as technical solutions there.

Cynthia Lo:
I think on the privacy part, I’m very curious to also hear from Mike on what are your thoughts on privacy and private sector data. I’d love to know your thoughts on that, too, or anything to add.

Mike Linksvayer:
Okay, yeah. Well, I first would say that I should have said in my introduction and shouldn’t assume people know what GitHub is, where I work. It’s the largest platform where software developers from around the world come to develop software collaboratively, a lot of it open source. And there are a lot of themes I’m hearing that you can, I mean, software development is kind of a very specific thing, but I think there are a lot of themes we’ve talked about on structured data, APIs, and privacy that maybe I can paint a little bit of a picture about how it works with data about code developments and that the code that programmers are writing is data itself. And indeed, you could think of it as unstructured data. It’s a text file, but it also, each programming language has its own structure because it needs to be able to parse the individual statements. So it’s really a matter of how much work do you want to do and what are the questions that you have about, for example, software developments. And then APIs is another aspect you can, if you want to crawl all of the code, we call our repositories is where a particular project on GitHub or similar platforms are collaborated on. If you want to crawl all the code in the world, that will take you a long time and be very resource intensive. However, GitHub and similar platforms also make APIs available. So I think that’s another kind of common theme that we can look at how exactly that looks with code where you can both kind of do queries to ask questions about kinds of projects that you’re interested in, or you can kind of try to ingest all of the activity as it comes out because GitHub has a very open kind of everything, all events feed, but that also is extremely expensive to do. So as a, and some researchers who do kind of research around programming trends, I don’t know, cybersecurity, there’s a bunch of different kind of research areas that you can look at GitHub data to do. A lot of them spend a lot of their time kind of gathering data before they can even answer or kind of validate whether they’re asking the right questions. So one approach to that and dealing with privacy is publishing aggregate data that will be helpful for some use cases. And that’s what we’ve done with a new kind of initiative we have at GitHub. We’re calling the innovation graph, which is basically a longitudinal data per country, per economy. roughly country basis at various kinds of activity. And so that, and we did it particularly to inform policy makers and international development practitioners who want to understand, use that data to understand things like digital readiness within their sphere of influence. And to, we were able to, publishing aggregate data, kind of satisfy some of these use cases or at least allow us people to explore the aggregate data to figure out what they want to make an investment in, you know, crawling more. It also sort of neatly deals with the fundamental privacy questions that you don’t want to identify, you know, individuals and things like that. So you can do that by kind of, you know, thresholding a certain number of people have to be doing an activity within a country in order to report aggregate statistics on that. So that covers a lot of different themes I think we’ve heard covered there. And I think there’s a ton of promise in, you know, a range of technologies like confidential computing, differential privacy. And I’m excited about them all because developers are building them and a lot of the research slash the R&D is open source. But simple, I guess I’ll just highlight here that, you know, very simple approach, but kind of very low tech approach of, as a first, you know, step at sharing data can be just sharing aggregate data that doesn’t have any privacy concerns can be, you know, that’s, it’s actually very much kind of to Henri’s point about like sharing data before you do all of the standards work because that will. you might be waiting forever. Also sharing aggregate data is a way to kind of take that first step, share data that’s gonna be useful to a range of stakeholders, and then work on the harder part that might be pending more advanced technology to deal with the harder issues.

Henri Verdier:
Oh yeah, please. A small answer. First, we know and cherish GitHub. When I was a state CIO, France was the second public contributor, government contributor in GitHub. And I don’t know if you know, but by French law, every software that the government develops has to be open source and free software. So open source and freely reusable. And more than this, every time that the government use an algorithm to take a decision, he has to publish the source code, but also to tell to the citizens that we are using algorithms, and to be able to explain in a simple word how it works. So that’s an important and coherent policy. And regarding the structure or unstructured data, what I learned from my open data experience, as I said, the first duty is to share data as they are. And then some people will structure. And if we think again about GitHub, so as I said, we cherish GitHub, but we work a lot within GitHub. And then, for example, in France, I don’t know if you know the Software Heritage Project. So here, some researchers from the INRIA decided to build the biggest possible archive of every software. So taking GitHub, but also some dead forges like the Google one. And they are working harder to structure it now. to be able to track the genesis of a soft. So they are working. But we did allow this because we did publish unstructured software. And then some people can continue. And maybe someone will do better. I don’t know. But we will have a variety of experiences. So my lesson is to separate, first publish, and then structure. And you can have a diversity of attempts to structure if you have a common ground of raw data or software.

Cynthia Lo:
I think you mentioned a really interesting. Sorry, please, Mike.

Mike Linksvayer:
Yeah, I just wanted to add to that. Thanks for cherishing GitHub. I definitely cherish Software Heritage. And really, archiving is almost a third part that is also extremely important and, I think, under-invested in. So I think in the software preservation space, Software Heritage is doing an amazing job. But I think that’s preservation of data is something that can be decoupled from the making available unstructured. But I think it’s extremely important to think about.

Cynthia Lo:
Yeah, absolutely. I think we actually have a slide here as well on the innovation graph that Mike had mentioned. And I also saw in the audience here, we have Malakumar, who helped on the standardized metric research because we wanted to understand exactly what type of data would help and what type of data would public sector or the social sector require. And as you mentioned, we have the API, which is that large set of data that Anri mentioned first. And then now we’ve gathered all of the data sets into specific aggregated data based on economies, just in the pattern that Anri had mentioned. I’m not sure, Mike, if you want to mention anything on there. I think you also may be able to share your screen if you’d like. But also, huge thank you to Malakumar, who led that standardized research. metrics research, who’s joining us online.

Mike Linksvayer:
Yeah, I can share my screen briefly, if it would be useful. I’m not sure what the, if folks will be able to see in the room. So maybe I’ll share and you can tell me whether you can actually see it in a useful way. Okay. Can you see, see anything on the screen? Yes. Yes. Okay, great. I think I’m sharing a window that has the page for France and the innovation graph. So this is just to show that we have a bunch of data on a per economy, on a per economy basis. Some of them are fairly technical. Git pushes is basically code uploads to GitHub and you can see that summer vacation actually happens. And repositories, as I was saying, this is the kind of unit of a project on GitHub and similar platforms have using the same concept. Developers, those are people actually writing the code or in some cases doing design around software project. Organizations, which is kind of a larger unit of organizing projects on GitHub that sometimes correspond to a real world organization, sometimes do not. Programming languages, this can be very useful for thinking about skilling within a country. And licenses are about copyright. And then probably, oh, and then topics are, this is currently very unstructured. Basically maintainers on GitHub can assign keywords to their projects, but this can also be, so it’s kind of very noisy data, but can be. helpful in, you know, really diving into, like, identifying a set of projects that you want to study more. And one thing that I’m excited about, so you can tag with any kind of text. So even going forward, people might tag that, you know, your project is relevant to a particular sustainable development goal. And so you’ll be able to kind of navigate the tags in that way or the topics in that way. And finally, perhaps most interesting and new is this kind of trade flow diagram. You can see economies that France is collaborating with that developers are kind of sending code back and forth. So you see U.S., Germany, Great Britain, Switzerland. It’s unsurprising that those are some of the top ones. You can also combine all the member states. And this is kind of a first release. There’s obviously a lot of other exciting analysis that can be done. The data is actually open in the repository. You can see the data here. And, you know, at the end of the day, data can be extremely boring. This is literally a CSV file. But that boringness is fantastic because it means that, you know, you can use your tool of choice, whether it’s a spreadsheet, a Jupyter notebook, or something fancier to analyze the data. And then I’ll just show really quick the reports that Cynthia and Mala mentioned, worked on. And that kind of really drove our requirements for this project, looking at what kinds of data about software development would actually be useful for international development, public policy and economics practitioners. So did a lot of, you know, discussions with entities that are part of the data development partnership, for example, to help design this. And then I also pulled up Software Heritage because I’m a big, big fan of it. They have a page on here that I can’t find immediately kind of showing all the different. projects that they indexed. But I cherish that, too. So anyway, I’ll stop sharing. If people later have questions about a particular country or metric, happy to share again.

Henri Verdier:
Yes, thank you. Very, very promising. We did agree, apparently, that the best policy is to first publish and think later. But we also have to think and to understand. I observe that we are more and more living in a world of interdependent, free, and open source software. And there are dependencies and security issues. If we don’t understand a bit the very structure of the soft ecosystem we are living in, we have to face important concerns. We can remember log4g, for example. We can observe that sometimes when we discover a security failure, because we don’t know the story of the evolution of the code, the forks, et cetera, we are not able to correct everything because we don’t have a proper vision of the history and the evolution of the code. And probably that’s a very important new frontier. We have to build new tools and new approaches to understand to control this very complex system of softs. Do you agree?

Helani Galpaya:
Yeah, I completely agree. I think Sri Lanka, just one example, has a really vibrant open source community. So this kind of data, if they are using GitHub primarily, could be really interesting to understand the evolution of that community as one thing. But just on the many countries are technology takers and product takers when it comes to e-government systems, so don’t have the luxury of saying everything will be open. They’re buying software from big companies. which will not certainly make the code open, right? Not even APIs, a very close, tight licensed system is what they’re buying. And I think as countries go along that technology maturity road, like Sri Lanka at some point came to the point where there was enough capacity with the CTO, with the government agency who was able to say, okay, we will build some of this in-house, I will use the open source community who’s working around the world to build some of these tools to set up the basic government architecture. But that takes a bit of time, I think, to get to this stage because the easiest thing is to get some donor money and to do a procurement of a closed system. And that’s really problematic, yeah.

Henri Verdier:
Small comment. When I was in charge, the budget for buying software in France was four billion euros a year. Half of it was consumer products, like, I don’t know, Windows. So for this, of course, we cannot negotiate. But half of this, two billion, were proper, yes, back-end system. And here, you can decide by law that in the procurement, the software has to be open. And we are trying to, we tried to do this, and now that’s quite a standard for French procurements.

Cynthia Lo:
I have many thoughts on that, because I’m very curious, we’ve been talking a lot during IGF about digital public goods and how that could be discovered a little bit more. But that is maybe a little bit off course, but maybe think a little bit about that, I think.

Mike Linksvayer:
Well, actually, don’t, if I could interrupt. It’s actually not off course in a way, at least maybe I can tie it in. I think the, and maybe I’ll share my screen again really, really quick. I mean, this might have been something we’re planning to talk about later, but I think it’s a good opportunity to actually. So this that I’m sharing now is the digital public goods registry, which. in parts which digital public goods could be software, could be data, could be AI models, could be a lot of different things, but it’s mostly software. In fact, you can see the breakdown here between software data and content. And you can see that they’re all tagged in relation to a particular SDG. Part of the, I mean, a big part of the motivation here is we’re gonna find and share solutions, you know, to progress on the various SDGs. The same kind of concept can be useful to kind of just basically curation of information about open projects is its own data project in a way, and can be very helpful in not reinventing the wheel, finding that, you know, a government or civil society institution is already, you know, serving a particular need in that software was developed in country A and people in country B can maybe take it and use it or customize it. And so they have a little bit more, I guess, sovereignty or autonomy to use those words that are quite popular now. See, and the way it’s really tied together, I think, is that the, yes, these are tools that can be helpful for development, for SDG attainment, for sovereignty, but it’s also a data projects kind of doing this kind of organization and, you know, which is its own effort. And I’ll stop sharing now.

Cynthia Lo:
No, thank you, Mike. I did also wanna highlight the Open Terms Archive, which I believe is a digital public good incubated with the government of France. Linking back to you mentioned on security. having ways to publicly record every version of a specific term, and I think it does tie in very well with security, and I was a little curious to go to the next slide about our topic on data, privacy and consent, and then also, widely, security. Would love to know some of your thoughts on how to really safeguard all the data that impacts the users. How should public or private sector provide data that is secure and ensures privacy? It is a big question, and there’s no perfect answer, of course, but another way to think about it is if there’s one suggestion for private sector data that are thinking of releasing data sets, if they release a wide set, is there anything they should keep in mind before doing so?

Henri Verdier:
Yes, that’s a very complex question, and there is no silver bullet. In Europe, we started with principles, so the GDPR, which started in France in 1978, decided that regarding personal data, data speaking about you, the consent of the user is needed, so it’s mandatory. So then we had to… You can conceive legal approaches or technological approaches, and for example, I’m very interested in an Indian project, Digital Empowerment and Privacy Architecture, that does organize technically a way to check the consent in a way that tries to be an infrastructure to unleash innovation. This is not a burden, this is an infrastructure for innovation. So you can implement it. on various approaches, and some are better than others, but there is a strong principle there. And for example, just to mention it, there is also a legal controversy between France and Anglo-Saxon countries because we consider personal data as something like your body. You are not the owner of your body. You cannot decide anything regarding your body, and you cannot decide anything regarding your personal data. There are some fundamental rights. In the world of the copyright, this is a different approach, and that’s great. We can extend. But in France, we have strong commitment that you cannot treat personal data as an average data.

Helani Galpaya:
I think that this approach many countries are taking, seeing a difference between sharing weather data, for example, and very different from personal data. I think we talked about it earlier as well. I think what the minister is talking about is sort of the policy legal, and then we talked about some of the technical solutions. And I think at a practical level, there’s private data, but there’s also commercially sensitive data. So our approach, for example, was to say we will not work with one telecom operator’s data because that’s highly commercially sensitive where the base stations are, which direction it’s facing, the power on those base stations, et cetera. We said we’ll go into this sort of kind of data and analytics to understand where people live, where people move. All of that is possible with mobile network data, but we will only do it if we have more than one company contributing data, and then we sort of anonymize at a company level. Like the base stations are not known whether it’s company X or Y. So the more data that you pool, that brings another level of protection on commercially sensitive data in our case, yeah.

Henri Verdier:
Yes, of course. Statistic anonymization can be. useful for some purposes. If you want to make epidemiology for example, if you want to understand where people, population goes in case of natural disaster, if you want even to check if France or Germany did respect more the lockdown during the Covid, do you know that we did respect the lockdown more than Germans? Yes, we learned this through operators data because of course everyone including me would have bet that German would have been more strong. So you can have a very important use of statistic data, but except this approach I think that you can never really anonymize a personal data, the data describing one person. You can delay the name, the age, at some point someone will find you. So if you want to build knowledge regarding one person, someone, here you need other approaches like confidential computing, technological solutions.

Helani Galpaya:
I agree and I think sort of it depends on the situation and what the company is releasing data for, right? I think what we’re saying is at aggregate level there’s a lot of use you can make out of it. You don’t need anything that’s even remotely identifiable, you can talk about groups of people. But I mean Covid was a classic example to understand movement that was good enough. Facebook check-in data was being used in some governments to see where people are, but at some point if you’re looking at an outbreak and then you’re trying to contact trace using data, then that’s a very different level of privacy violation and you need the legal backing to say okay this is a national emergency and I’m now going to actually identify who owns that cell phone because we need to know where that person may have spread, you know, moved and then spread the virus. So it depends on the question you’re asking really, what company data can do and what the safeguards should be.

Cynthia Lo:
Thank you. I also want to make sure, give an opportunity to Mike, if you have any thoughts on safeguards and privacy and consent on private sector data being released.

Mike Linksvayer:
I think really all of the key points have been covered already. So I don’t think I really have anything substantive to, I mean, directly on point to that, but I feel it’s related to another thing that’s happening now that’s kind of related to open data and open code, which is a debate around how open, quote, open source AI has to be. And a lot of, and the reason why there’s a link is because a lot of times data can’t be fully opened for privacy and other reasons. And yet society can still benefit from having some of the outputs of that training often called the model. And so there’s kind of a debate about what kinds of sharing of data that’s being used to train and open AI model makes it open or not. To some extent, this is a very academic debate, but at the same time, it could end up being, you know, reflected in law as, you know, because it’s often recognized that that open source might need special treatment because of its non-proprietary nature, but, you know, it can be, there are kind of a bunch of different ways that you can, for a data corpus that’s used to train an AI model, the raw data is extremely useful, obviously, but there are other things that can be useful at all that can be useful as well. For example, a, you know, a description of the schema of all the data that you’re using so that other people can bring their own data. and replicate the model, if like two parties have access to similar private data sets, then they can be close substitutes for each other. So I think that’s like a burgeoning area that all these issues kind of come back together around.

Henri Verdier:
So Mike, this is not just an academic issue, it’s a question of which data you did use to train the model. First, you are in California, I feel. I have read that one of the important reasons of the screenwriters’ strike was generative AI, because they wanted to be sure that the work will be respected, so it can have a very concrete and important impact. And if we don’t pay attention to this, first we will delete all the international architecture of intellectual property, then we’ll create new disbalance and inequalities, because some big companies will take the profit of every creation of all humankind, because they will take everything, everything we did, dream, write, learn, publish, share, and they will use it to train some big monopolistic models. So from my perspective, this is not just an academic controversy, this is one of the most important topics of those days, and we have to be sure, and we can also think about security issues, security concerns. So the traceability, if I may, of how was this model educated is a very, very important issue, and we don’t have proper answers today, because you can…

Mike Linksvayer:
I agree with you, just to clarify the academic comment, it was exactly what can you call open or not, it’s this thing that’s somewhat academic, but the fundamental issues are of extreme importance, and I… and I really appreciate the, you know, French government’s direction around open source AI. It is extremely important.

Helani Galpaya:
No, I mean, just to say there’s like a million conversations about training data and the problems of using certain data for training. I don’t think this is the forum for it. Women, people of color, developing country people are at the receiving end of decisions made by models made that were trained on data that does not talk about them. So, you know, that’s a whole other field. So I don’t think we need to talk about it. Just to say that completely agree the issues around training data are very real and huge. Another important concern is the definition of privacy itself. Because 10 years ago, to protect my privacy, I just had to protect my personal data and I was protected. Today, I can know a lot about you without knowing anything about you. Because I will educate a model and it will predict something about you. So I cannot protect myself just while protecting my personal data. And not living in the digital world is no longer a safeguard against not being profiled. You can profile me even if I have no email address, no presence online.

Cynthia Lo:
On privacy, I think being able to layer in different data sets and as a result, you have a profile of a person. I think it is fascinating to see different data sets. I think as I’m looking at the time now, I want to move on to our last point on promoting and supporting open code initiatives. Considering all of the topics we talked about with security, safeguards, privacy. What is the best way to really promote open code initiatives and how can member states do so?

Henri Verdier:
So first, there are more and more approaches, and that’s great. So you have a strong European policy, for example. You have a network of open source officers in European governments. You have, I did mention the French law, it was named La Loi pour une République Numérique, so La Loi pour une République Numérique, that imposed to the government to publish everything in open source and fully reusable. We are promoting this is a European foundation for digital commons because we want Europe to take its responsibility and to contribute to finance commons that are important for for freedom and sovereignty and self-determination. So there are a lot of initiatives, but the more I work on this field, the more I observe that financing is not enough, and maybe it’s not the most important part. Really using free software, open source, contributing, allowing your public servant to contribute, paying attention. For example, when we did prepare the DSA, we did quite kill Wikipedia, because we said companies with more than, I don’t remember, 400,000 connection amounts in more than seven European countries has to be a legal representative in every European state. For a big tech company, that’s not very expensive, but for Wikipedia, that’s very expensive. So we need a conviviality, we need a proximity, we need a constant interaction, we need a mutual understanding, and this is maybe the most difficult today.

Helani Galpaya:
I want to add just two things to this, which is, I think, one is capacity. Public sector has very low technical capacity in many of the majority world countries in the developing world, and the expectation of, except for a handful of public sector officials, anyone else being able to contribute code, I mean, it’s a dream for many countries, so maybe what we need are, and that’s great if you can do that, and that’s kind of the aspirational stage you want to be. So instead of that, another solution is to build the communities, because the private sector is a lot more evolved and highly skilled, right? So it’s like, I keep going back to Sri Lanka, you know, the really vibrant open source community, highest number of contributions to Apache, for example, right? So that comes from Sri Lanka, that comes from, you know, being in high-paid export-oriented software companies, but a couple of people really getting this community together to create this. So how can they participate in government-related stuff? I think that needs two things. One is that community building, but they can’t participate in government procurement. That’s really hard. Government procurement is a system that puts out a bid and gives points to a company that has done this ten times before in five reference countries, right? A group of people who come together who don’t have that references, it’s very hard to signal that they can do this. So I think there’s some problem there. Then at a practical level, I think if you want to maybe, you know, not go all out, but at least give some preference for open source, some governments, what they do is, you know, out of a hundred, allocate five to ten extra points, which you get as a bonus if you are proposing an open system. And there’s variations on open, completely open, you know, free and open, you know, open, you know, open AIs, et cetera. So, you know, a gradated set of systems marks in the procurement. So different types of companies can at least have a hope of participating and competing against the large firms. This is kind of the same strategy that governments in the South have used to promote local companies when it comes to government procurement of IT systems. It’s very hard to compete with, you know, I mean, I, for example, purchased on when I was in government pension systems, right? A big company will come and say, I’ve done pension systems in five of these companies, five countries. It’s very hard for a local company. So then we say, well, if you at least have a local partner in the first year for technical support, in the second year for actual deployment, you get five marks. So the same way, you can build up this sort of legacy of open source by allocating marks over time in procurement systems.

Henri Verdier:
It totally takes a point. And that’s interesting, because if you do observe the story of governments, they had technical skills to build bridges, roads, railways. And there is something different in the history of IT. Maybe because the story started in the military era, as you know, with projects to launch rockets from a submarine. And it was, from the beginning, very big procurement, very expensive, with very bizarre rules of conducting projects. And governments should learn to work with ecosystems, as you say, to be maybe a bit more humble, to learn about agile methodology, to agree to start with an imperfect project and to improve it, to have a constant improvement policy. So this is a cultural change. And just to finish, maybe it will be time to conclude. That’s why, from my perspective, there is a strong connection between. open source movements, open government movements, because you need to learn humility, to be an actor within a network of actors, and a state modernization, and maybe the new democracy that we need with collective intelligence, citizen engagement, participation, contribution, et cetera. You cannot work just on one of the three topics. You need to cross the three topics.

Cynthia Lo:
Perfect, thank you. And I think looking at the time, we are almost at time, but before we go to Q&A, I want to make sure, Mike, if you have any thoughts as well on this topic of promoting and supporting open code initiatives.

Mike Linksvayer:
Sure, first, I mean, everything already said has been great, and I have too many thoughts, but I’ll just say one thing. I think what doesn’t get measured doesn’t get paid attention to. It’s fantastic that we have free and open source software advocates within government now, but a much broader set of policymakers need to appreciate the role that open source plays in the economy and development, et cetera, and that’s kind of one of the motivations of the innovation graph that we launched, that we want to, if you want to see numbers that are kind of tuned to your jurisdiction, then you can look at those even if you don’t have a fundamental appreciation of open source and understand that it’s a really big driver of jobs, economic growth. People have used GitHub data to show that more, including policies that support, that foster open source leads to more startup formation, more jobs, and things like this. There’s a really important study from the, or commissioned by the European. Commission, I guess, several years ago kind of putting a floor on the contribution of open source to the EU economy of, I believe, the range was like 65 to 95 billion euro a year. So quite significant and would love to see that replicated in, you know, in other jurisdictions in a way that’s very legible to policymakers who don’t know anything about, don’t have any affinity for open source, don’t know anything about technology necessarily. So I think those making it legible is super important.

Cynthia Lo:
Thank you, Mike. And I think before we move to our Q&A, particularly open source in the social sector, there’s a lot of organizations that work in the social sector that are also open source. We mentioned Digital Pop Goods and there’s also research in India, Kenya and Mexico taking a look at what were the drivers for social sector open source organizations. How are they funded? What are their initiatives as well that I think in another section we can explore more on open source in the social sector? OK, and I believe also Malakumar was instrumental in leading that research. As we move to our Q&A section here, opening the floor up to anybody who has any questions here in person, please.

Audience:
Hi, good morning, everyone. My name is Sumanna Shrestha, I’m a parliamentarian from Nepal, and I was very curious to attend this because I have a lot of questions. So the first one is how do you incentivize these entities to actually share data? When you think about different sectors that exist to improve the society, you’ve got private sector, obviously. you’ve got government, and you’ve got a very influential INGOs and UN that work. So how, what are some of the ideas, what has worked maybe in Sri Lanka or other parts of the world to incentivize these different actors to actually share data in whatever format, right? Whatever privacy setting format. The reason I ask that is one of the things in my previous life before parliamentarian, what I’ve seen is there is a massive incentive to hoard the data and then come up with insights to then present and say, okay, I have some advantage over everybody else that then sort of warrants funding for me to go out and do something. It could be going and distributing relief material when there are earthquake or disasters, for example. Right, that’s one. And then it’ll be really great to understand a bit more on this French procurement law that you mentioned that you require a certain percentage to be open source. How did you, in Nepal we have a very big distrust, mistrust towards anything that’s open. They think anything that’s free is not good quality, et cetera. So we tend to procure, and you’re smiling maybe because we see the same problem. That’s exactly the contrary. If you have a closed system, you don’t know if there are back doors. Right, so I think maybe also, I understand, but how did you go about building that level of trust in open sources if you’ve seen, if there was something fundamental you did? I think it also maybe pertains to the capacity, right? How many people can actually have the capacity in Nepal to go check the open source code and then see if there are back doors? So what are some of the in-built assumptions that you have? and what are the maybe very focused attention that you paid to strengthen those pillars to then bring this level of trust in open source? I think let’s start with that.

Helani Galpaya:
Okay, I mean I’ll go on the data part I think. Sort of the superficial answer is it’s actually very difficult to get the incentives right for data sharing, right? Data is power and therefore the incentives are to hoard it whether you use it or not actually, that’s the interesting part. So we’ve spent the past year looking at public private data partnerships across Africa, Asia, Latin America, Middle East and the Caribbean and mapped like over 900 different partnerships around data and done some in-depth case studies and we see a couple of things. One is that data sharing is a really high transaction cost activity, right? Because capacities are different, particularly if you’re dealing with a large company and trying to get some data, you don’t even know who to reach because there are regional managers, marketing managers, somebody in San Francisco, et cetera, et cetera, right? So it’s high transaction cost and what that does is it privileges the really large companies because they can come negotiate with the government, spend the money and they can also enter a market and subsidize something with data with a very long-term view. So I mean Microsoft, for example, is a case in point where they can go and do something in a country that’s in the early stages of development, digitization because in 10 years when everyone gets a computer, you know that operating system is more likely to be a Microsoft one. those kinds of investments for the long-term in data partnerships, right? Many small ones don’t. So partnership building, this is why I said the easy answer is it’s difficult, because partnership building around data are really difficult. So the incentives have to be set up. So we talk often about this incentive of, you know, you can get data from Uber, if it is in Nepal, but I’ll talk about Sri Lanka, that has some percentage market share. Uber can give it to government or civil society to understand, you know, where people are or something. But actually, if you now combine with two other local taxi companies and share the data back with Uber and everybody in a non-commercially sensitive way, it’s now suddenly much more useful to Uber, it’s useful to the local person, it’s useful to the transport planning person in government as well. So you kind of find the incentive system that makes it worthwhile for the large and the small operators to come and play. And then you set up the technical infrastructure for data sharing, of course, right? And you give them the kind of confidence that says we are not going to share sensitive data, you know, like in the telecom example I gave. You also then put the legislation around it for telecom data in particular. We really had to sort of make sure that the telecom regulators didn’t have a problem. So you need sort of research exceptions or public policy or journalistic exceptions in data sharing, particularly if it comes to sensitive data. So, I mean, bridging those transaction costs and getting the incentives right, those are the broad principles, but really finding the incentives are a case-by-case basis. So we find the successful ones are often where like a middle broker is involved in getting these data partnerships going, right? Somebody who can convene multiple people. So a classic example would be in India, UN had a now defunct, but a Pulse Lab Jakarta, the UN data governance system sort of, you know. They would sit in the middle and convince government that they need to play in this data game, that they need to use private sector data. They develop that capacity, because government doesn’t automatically say, I’ll use private sector data, right? And sometimes governments can’t say that either, because the census department often has a rule that thou shalt conduct national surveys, not use call detail records for population projection, right? So they don’t give up, so work with government. Then bring like five different private sector players together. Sometimes it involves paying for the data, sometimes it’s setting up the incentive systems. The Global Partnership for Sustainable Development Data in Africa brought together the Group on Earth Observations, which allowed satellite data about Africa, like as a block, and to any country who wanted it, they made it available. So data brokerage also plays a role. I’m not saying government can’t be a data broker, but that role of a data broker is really important, because otherwise what you have is one-off data transactions, right? I mean, during COVID, everyone managed to get some Facebook data to understand where people were. That’s not really useful, because now COVID is over, none of those data is flowing anymore to government or to civil society. So to set it up in a sustainable way that you can understand development and use that data requires a bit more.

Henri Verdier:
Thank you for your very precise and important questions. First, as you said, most people of power has quite an instinct of hiding the data. But this is the old approach. First, obviously this is not the best global organization, as you can easily see in the bureaucracy, for example. When I did join the French government 10 years ago to lead the open data policy, sometimes four different administrations were sharing the same data with mistakes, and they did spend a lot of time and money to… to sell data between administrations of the same government. So it was unuseful, expensive, long. I discovered, because it was expensive, sometimes some administration did use very old data sets because they did buy it just every four years, for example, to the neighbor and with the same money because we are one state. So this is not the best global organization and maybe this is not the best strategy. What I’ve learned from the digital economy story is that platform strategies are better. If you have data, you share this and you become the center of the ecosystem and you have more influence, maybe less direct power but much more soft power. And the story of, I don’t know, Microsoft, Google, Amazon, is a story of people sharing their data, not of people hiding their data. So first, yes, this is a natural instinct but we have to fight it because this is a stupid strategy to hide your data. Then, regarding the controversies regarding open source, yes, in France, we usually consider that open source is the best security approach because you can check, you can contribute, so if you discover something, you can fix it. That’s funny because, for example, if you observe the story of European countries, now everything is converging but 20 years ago, the French public sector did use a lot of open source and free software and not the private sector and in Germany, it was the contrary. The German companies did use a lot of free software and not the German government. So you have also national histories, of course. It depends of your… But in France, probably, it’s also political. Most public decision-makers consider that open source is less expensive. And if it’s not, because sometimes it has cost, of course. But you will spend your money to pay national workers, not benefits in Seattle. So that’s a better use of your public money, because you create value in your country. And usually it’s less expensive. A better security, and maybe a better democracy. You know, in the Declaration of Human Rights, in 1789, we say that the government has to be accountable, that every citizen has a right to understand what the government is doing, and to check if this is the most efficient approach. So now, most of the governmental actions are made through big and complex systems. If you don’t have the right to understand the black box, you are not a perfect democracy. And you have to rely on someone that pretends to make the best, but you don’t know. So the mix of cost, security, and democracy makes that in front, this is not a controversy anymore. Most people in the public sector encourage this approach. If you need a strategy, you did ask for. The first easy step is about public procurement. I’m not speaking about buying software. I’m speaking about buying services. I remember 10 years ago, the city of Paris wanted a network of self-driving cars. But they did write in the procurement, and I will access to every data, and I will share it in open data. And the companies didn’t want to. But they said, that’s my market, my procurement. If you don’t accept, I will take another solution. So for water, for transport, for when you buy a service, or you delegate a public service, just think about writing one clause saying. And I will take the data, and I will share the data. That’s not so difficult if you have a competitive market. The second thing, of course, is to explain, to exchange, to build an ecosystem. Yes, to be frank, I don’t think that those strategy can be done if you don’t have any ecosystem. It can be an ecosystem of open source software. It can be an ecosystem of startup or a big tech company. I don’t care, but you need to work with the civil society or private sector. You need to work with outside of the government. If you cannot rely on some skills and competencies and energy and innovation and creativity, that’s very difficult. And regarding the La Loi pour une République Numérique, so to be precise, we wrote that every software that the government develop or we pay for development has to be open source. It was built on the premises of the law for free access to information. During the 70s, we decided, so we wrote that the citizen has a right to ask for every information regarding government action. So how did you pay? Where did the money go? And we did build on these premises. So of course, when we buy, as I said, a consumer product, we don’t ask for open source. But when we finance the development of the product or when we develop ourself, this is mandatory. Regarding the competencies, as you said, this is very often a problem. But you don’t really need very, very, very skilled people, because we are speaking about a simple IT. And sometimes, for example, just a funny story. Ten years ago, I did create also the job of chief data officer for the French government. And I did hire a great data scientist to fix and to build good public policies. And I did hire brilliant people, and we did help maybe 100 administrations to improve some public policies. And after four years, they went to me and they told me, this job is a bit boring. We did just use Excel software and linear regression. Because government has very structured data and very simple questions. You don’t need to make a generative AI on a big data with a big… You don’t need this to fix 80% of the problems. If you have simple people with simple software, but very focused to have an impact. And very often, we did build, for example, in France, the French ID system, France Connect. Which is used now by 40 million people every week. We are a small country regarding to India. So 40 million people is something in France. I did build it with six developers in six months. The global price was 600,000 euros. Of course, if I had decided to buy it to some big companies that you can imagine, it would have cost, I don’t know, 30 million euros. But when you do it yourself with simple principles, with this agile methodology I did mention. So make a first minimum viable product and then improve it. That’s not so expensive and you don’t need a Nobel Prize, if I may. You just need good and serious developers. And maybe one last thing. I was there when we decided this law regarding… So some people had concerns. So we decided to mention a cybersecurity exception. So if the cybersecurity agency say that publishing the code is dangerous, we won’t. It was five years ago. It did never happen. They did never find a software publishing the code was dangerous. So it was a security to make people comfortable, and it was never useful.

Helani Galpaya:
Let me just make a quick thing. I think this is quite amazing. Just one little challenge, depending on the structure of your civil service, is to attract people with skill to do this kind of development. You need to look at what other options you have. And particularly in South Asia, they can work for a global IT firm, usually for five to 10 times the government’s salary. And that’s a real incentive problem. So the way some countries deal with it is to have these other structures, like a government-owned private company that does a lot of this IT development, who don’t have to abide by government pay scales. And that then suddenly makes it attractive, somebody who wants to do civic tech, public technology, but also isn’t compromising and making low government salary.

Henri Verdier:
If I can say something, because that’s very important. So most of the people that went to work with me did divide their salary by two. But you can have very skilled and dedicated people if you give them a mission and autonomy. But if you ask them to divide their salary and to obey to a big hierarchy chain and to respect a stupid and a very complex framework, so you have to give them a mission, a real mission. Let’s fight unemployment. Let’s educate. Let’s end kind of autonomy. And that’s why we have to change the way we do organize bureaucracy. But that’s not impossible. And actually, a lot of countries did it. And more and more, I feel. And always with people coming from the private sector. Private is a big, important open source ecosystem. It can be also Wikipedia, GitHub, OpenStreetMap. In France, we work a lot with the OpenStreetMap community. Linux, Debian. It’s not always private firms. But that’s outside of the government.

Cynthia Lo:
Thank you. And taking a look on our virtual attendees, we have some questions on whether there are government tools regarding securing data. Just double check. And potentially, I think let’s start with that first. If there’s any thoughts on that. If not, we do have another question as well.

Mike Linksvayer:
I have a small comment on that that might not be directly addressing it. But I just want to highlight how important basically cybersecurity is for protecting data. If you have a breach to an exploit, then your data is exposed, no matter what other measures you have taken. And I want to kind of tie that back into the previous discussion. I think the idea that open source is more secure because everybody can audit it and see exploits and fix them is sort of true. But also a little bit of a double-edged sword and can actually be useful and is very pertinent in policy conversations now. Because one analogy is that open source is free, but it’s also like a free puppy that you have to take care of. And due to incidents like Log4J, I think the attention of policymakers has been focused that open source is part of our societal infrastructure. And it’s something that we can’t only rely on the developers of individual projects to adequately secure. So there needs to be kind of investment from a bunch of stakeholders, including governments in making sure that that ability to for everybody to review the code and make fixes that actually acted on. And Germany is really a leader in this with the sovereign tech funds, but there are others in the U.S. Open Technology Fund and kind of others brewing. But I think that’s a really important point that potential for open source being more secure actually needs to be actioned and needs coordinated action. And I think in sort of another way that this kind of loops back on itself is that those decisions about where to invest, what open source code is actually no critical for power plants, for elections or whatever, you actually need data to be able to identify where you make those investments. Otherwise you’re boiling the ocean. So it’s really tangential, but it just basic cybersecurity is just absolutely crucial for protecting data.

Henri Verdier:
You’re completely right. Open source creates a possibility to check, but someone has to do it. I have another funny experience. In France, we had interesting free bureautic suit, so Word, Excel, it was named Framasoft. And during the COVID, the Ministry of Education decided and said publicly. I will use Phramasoft. And the people from Phramasoft did yell and contest this, and they said, are you crazy? Are you really considering to put 1 million teachers and 10 million students on my infrastructure without giving me anything? But I will die. You have to finance infrastructure, servers, or you will kill me. And that was funny because it could have been seen as a big victory. That’s the French Ministry of Education, one of the biggest international administrations, bigger than the Red Army. So it could have been seen as a victory, but it was the kiss of death. And so we have to be serious and to nurture and protect and finance this ecosystem, or we will kill it. There is no such thing as a free software, free lunch. Someone has to pay a bit.

Cynthia Lo:
Thank you. I know we are at time, but I want to double check to see if anybody has any questions in the audience here or online. All right. Well, thank you so much, everybody, for attending. Any concluding thoughts from our speakers here? Nope, not a problem. Well, thank you so much for everybody to attend this very early morning in Japan session. And we look forward to any other thoughts that you have on OpenCode on development. Thank you.

Audience

Speech speed

176 words per minute

Speech length

440 words

Speech time

150 secs

Cynthia Lo

Speech speed

157 words per minute

Speech length

1275 words

Speech time

488 secs

Helani Galpaya

Speech speed

177 words per minute

Speech length

3638 words

Speech time

1234 secs

Henri Verdier

Speech speed

155 words per minute

Speech length

5032 words

Speech time

1950 secs

Mike Linksvayer

Speech speed

160 words per minute

Speech length

3113 words

Speech time

1165 secs

Telegram Bot Test Test

 Text, Computer Hardware, Electronics, Hardware
Telegram Bot Test Test 2

EU opens antitrust probe into Meta’s WhatsApp AI rollout

Regulators say WhatsApp's AI rollout raises competition issues across the sector.

Uzbekistan sets principles for responsible AI

New national standards for AI development and deployment have been implemented by Uzbekistan, focusi…

UNESCO launches AI guidelines for courts and tribunals

AI misuse in the UK High Court highlights the need for standards; UNESCO’s new principles emphasise …

FCA launches AI Live Testing for UK financial firms

The initiative complements the FCA’s Supercharged Sandbox, guiding firms through responsible deploym…

Texas makes historic investment with $5 million Bitcoin purchase

The state’s crypto investment acts as a placeholder while a permanent custodian is finalised, signal…

Serbia sees wider coverage as Yettel activates 5G

5G deployment marks next step for Yettel in Serbia.

AI model boosts accuracy in ranking harmful genetic variants

The model identified novel disease genes, offering clinicians clearer guidance in complex unsolved c…

Meta expands global push against online scam networks

A broader security strategy of Meta demonstrates where ministers increasingly focus their attention …

New findings reveal untrained AI can mirror human brain responses

Johns Hopkins says brain-inspired AI can mimic neural activity before training.

YouTube criticises Australia’s new youth social-media restrictions

YouTube has criticised Australia’s imminent under-16 social-media ban, arguing the restrictions will…

Governments urged to build learning systems for the AI era

Researchers warn that slow adaptation leaves public institutions vulnerable in a fast moving AI envi…

Japan plans large scale investment to boost AI capability

Officials aim to boost usage through investment incentives, new infrastructure and updated risk mana…

Mistral AI unveils new open models with broader capabilities

The latest Mistral 3 family introduces new multilingual and multimodal models that aim to improve ef…

Public backlash grows as Coupang faces scrutiny over massive data leak

Coupang's data breach exposes deep concerns about South Korea's digital-security standards.

AI helps detect congenital heart defects in unborn babies

A study of 200 fetal ultrasounds across 11 centres showed AI improves detection rates, efficiency, a…

NVIDIA platform lifts leading MoE models

The rack-scale design by NVIDIA accelerates MoE models by boosting communication among experts, offe…

AWS launches frontier agents to boost software development

Teams benefit from less repetitive work, faster development cycles, and stronger security, transform…

Honolulu in the US pushes for transparency in government AI use

Rising concerns in the US city of Honolulu reflect growing public demand for transparency as artific…

Amazon rolls out Trainium3 AI chip to challenge Nvidia’s dominance

Amazon has introduced its latest AI training chip, Trainium3, claiming big performance gains and low…

UK ministers advance energy plans for AI expansion

Recent discussions on national AI strategy deepened after UK ministers assessed how improved grid ac…

EU states strike deal on chat-scanning law

A long-awaited breakthrough on the EU’s child protection law has reopened a fierce debate over how f…

OpenAI faced questions after ChatGPT surfaced app prompts for paid users

A Peloton suggestion in ChatGPT sparked debate about intrusive app discovery tests.

Regulators question transparency after Mixpanel data leak

Security experts say the Mixpanel case highlights longstanding weaknesses in behavioural-tracking sy…

Irish regulator probes an investigation into TikTok and LinkedIn

Authorities in Ireland have begun investigations into TikTok and LinkedIn to assess whether reportin…

OpenAI expands investment in mental health safety research

The new programme by OpenAI supports external researchers working on culturally grounded mental heal…

AI growth threatens millions of jobs across Asia

China and Singapore are reaping benefits, while poorer nations struggle with basic digital access an…

eSafety highlights risks in connected vehicle technology

Reports from frontline workers show that smart vehicle connectivity can create vulnerabilities, prom…

Quantum money meets Bitcoin: Building unforgeable digital currency

A new battle for the digital throne is emerging as quantum money moves from theory to possibility, r…

Gemini Projects feature appears in Google app teardown

Early Gemini Projects findings show a structured way to manage multi-step tasks within Google's AI i…

Greek businesses urged to accelerate AI adoption

Executives from Google Cloud emphasise that AI uptake in Greece must accelerate, with financial and …

EU moves forward with Bulgaria payment review

Bulgaria’s third payment request received partial clearance as the Commission found 48 milestones fu…

UK moves to give crypto full legal property status

Industry groups welcomed the UK’s recognition of crypto as personal property, saying the change offe…

Bank of America advises clients to invest in crypto

Starting January, clients can invest in four of the largest bitcoin ETFs, with total assets exceedin…

V3.2 models signal renewed DeepSeek momentum

The new DeepSeek releases aim to close gaps with GPT-5 and Gemini in advanced reasoning tasks.

Thrive Holdings deepens AI collaboration with OpenAI for business transformation

Partnership with Thrive Holdings advances OpenAI's enterprise AI strategy.

SIM-binding mandate forces changes to WhatsApp use in India

Frequent SIM checks and six-hour logouts will change daily WhatsApp use across India.

Dublin startup raises US$2.5 m to protect AI data with encryption

Mirror Security, a UCD spin-out, has closed a US $2.5 million pre-seed round to roll out fully homom…

Apple support scam targets users with real tickets

Scammers create professional-looking emails and calls, making careful verification and multi-layered…

Australia stands firm on under 16 social media ban

Two teenagers challenged the law in the High Court, citing rights concerns.

Safran and UAE institute join forces on AI geospatial intelligence

Advanced geospatial reasoning in Safran’s agentic AI systems will allow operators to respond to situ…

Valentino faces backlash over AI-generated handbag campaign

Social-media users have condemned Valentino’s AI-generated campaign for its DeVain handbag, calling …

Jorja Smith’s label challenges ‘AI clone’ vocals on viral track

A dispute has erupted after Jorja Smith’s label accused the creators of a viral track of using AI to…

Amar Subramanya takes over Apple AI as Giannandrea steps aside

The shift places Amar Subramanya at the centre of Apple's AI model strategy as the company seeks fas…

Data centre power demand set to triple by 2035

Planned sites are growing larger, with many expected to surpass 500 megawatts.

When AI use turns dangerous for diplomats

Shadow AI is quietly reshaping diplomatic work in ways that threaten confidentiality, weaken strateg…

Singapore and the EU advance their digital partnership

Brussels hosted the second EU–Singapore Digital Partnership Council, confirming shared priorities in…

Italy secures new EU support for growth and reform

A new EU assessment confirms Italy’s progress, enabling a further payment under the Recovery and Res…

Poetic prompts reveal gaps in AI safety, according to study

Findings indicate that creative phrasing can undermine AI filtering methods, with several prominent …

Australia launches national AI plan to drive innovation

Investments in digital skills, research infrastructure and global partnerships underpin Australia’s …

Philips launches AI-powered spectral CT system

Spectral CT with AI enables clinicians to differentiate tissues more accurately, supporting higher d…

Accenture and OpenAI expand AI adoption worldwide

Collaboration builds on OpenAI’s work with major enterprises, combining breakthrough AI technologies…

Most German researchers now use AI

AI is increasingly seen as a co-creator or manager in research, extending beyond automation to resha…

Europol backs major takedown of Cryptomixer in Switzerland

Cryptomixer’s shutdown disrupts a key system used by ransomware groups and dark web markets to hide …

Fake AI product photos spark concerns for online retailers

Legal experts warn that using AI to claim refunds may constitute fraud, urging platforms to enforce …

NVIDIA and Synopsys shape a new era in engineering

Engineers gain new opportunities as NVIDIA and Synopsys integrate AI driven workflows, Omniverse tec…

Cairo Forum examines MENA’s path in the AI era

Experts at Cairo Forum warn of rising AI risks.

DeepSeek launches AI model achieving gold-level maths scores

An open-source AI model, Math-V2 by DeepSeek, achieves gold-level performance at global mathematics …

Europe boosts defence with Leonardo’s Michelangelo Dome

Rising defence budgets and investor interest are driving Europe’s push for integrated military techn…

Meta criticised for AI-generated adverts scams

Several fraudulent sellers were removed but the company continues to face criticism for failing to p…

Baidu emerges as China’s AI chip leader

China’s AI chip market is expanding rapidly, with Baidu’s Kunlunxin unit emerging as a key player in…

Coupang breach prompts scrutiny from South Korean regulators

Millions of Coupang users in South Korea affected by a breach prompting fresh questions over data-pr…

Spar Switzerland expands crypto payments across its mobile app

Switzerland’s supportive regulations and Spar’s new crypto integration highlight growing demand for …

Mixx and MVola services grow under Axian’s partnership with Mastercard

New Axian–Mastercard collaboration strengthens mobile-payment access across key African markets.

DigitalBridge joins KT to boost Korea’s AI infrastructure plans

Regional AI growth drives KT's alliance with DigitalBridge to build modern computing facilities.

Open-source tech shapes the future of global AI governance

China’s growing embrace of open-source AI is rapidly transforming global expectations about how futu…

South Korea retailer admits worst-ever data leak

The breach began in June but remained undetected until November.

Fraud and scam cases push FIDReC workloads to new highs

Scam-related complaints now dominate FIDReC's caseload, reflecting rapid shifts in financial crime r…

UK to require crypto traders to report details from 2026

CAFR-aligned reporting will target cryptocurrency transactions, with traders and exchanges facing pe…

DeepSeek opens access to gold-level maths AI

Open release of Math-V2 promises broader access to advanced reasoning systems, following its gold-le…

AWS unveils new agentic AI tools at reInvent

Deepgram expands speech AI across AWS platforms

Vanity Fair publisher penalised for cookie breaches

Visitors received unclear information as several trackers appeared falsely labelled as strictly nece…

ByteDance launches AI voice assistant for phones

China’s AI assistant market grows rapidly as Doubao outpaces rivals with millions of active users.

EU members raise concerns over the Digital Networks Act

Fresh resistance emerged around the Digital Networks Act as governments, consumer advocates, and the…

EU-South Korea digital partnership enhances collaboration

Shared efforts in research, standardisation and cyber policy move forward as the EU and South Korea …

Concerns grow over WhatsApp rules as Italy probes Meta AI practices

Regulators link WhatsApp policy changes to potential consumer harm, noting reduced choice in fast-gr…

Estonia invests in Germany to strengthen European tech independence

The opening of a new factory near Leipzig reflects deeper cooperation between Estonia and Germany an…

Salesforce expands investment in Greece with Greek AgentForce

New investment and local language AI support signal stronger engagement from Salesforce in Greece as…

New budget signals Japan’s move to steady tech investment

METI's budget requests reinforce Japan’s long-term technology and trade ambitions.

New digital strategy positions Uzbekistan as emerging AI hub

These brand new initiatives place Uzbekistan on a faster digital trajectory, with Nvidia's collabora…

AI detects chronic stress in medical scans

Scientists are using AI to measure biological stress signals in CT scans, helping to reveal health r…

Utah governor urges state control over AI rules

Cox argues federal control moves too slowly to manage fast-developing AI risks effectively.

AI police assistant Bobbi launches in the UK

A new AI assistant is set to change how people seek police guidance online by giving quicker answers…

South Korea’s Hopae boosts EU presence with €5 million investment

Europe’s upcoming EUDI Wallet rules drive Hopae's push to integrate more electronic identity systems…

Vietnam tops region in AI adoption and trust

The digital economy is set to reach 39 billion dollars next year, dominated by e-commerce growth.

Zero-click searches challenge Japanese websites

Survey data show around 63 percent of Google searches in Japan now end without a site visit.

Huawei and ZTE expand 5G foothold in Vietnam amid US concern

US officials warn Vietnam’s Huawei–ZTE 5G contracts could undermine network trust and restrict acces…

UK fast-track cyber recruits begin protecting defence networks

Defence leaders in the UK highlight the significance of the new cyber pathway as graduates begin wor…

New AI projects set to reshape South Korea’s public services

Cooperation with the United Arab Emirates will expand South Korea’s role in large-scale AI infrastru…

Apple opposes India’s antitrust law over $38 billion fine

The dispute arises after CCI found Apple limited third-party payment processors for in-app purchases…

AI-assisted diagnostics expand across Europe

Patient care, staff workload and operational efficiency are the top priorities for AI in European he…

European Commission begins assessing Apple Ads and Maps

DMA threshold confirmation for Apple Ads and Apple Maps now moves the European Commission into its n…

Brain’s reusable thinking blocks give humans a flexibility advantage over AI

The study suggests that understanding how the brain recombines skills could inform better AI systems…

Entry-level work transforms in the agentic AI era

Companies expand entry-level hiring despite AI adoption, valuing graduates who can evaluate AI outpu…

Crypto mining to become legal under Turkmenistan’s new law

Turkmenistan’s sweeping move to regulate its digital asset landscape signals a major shift in how th…

DeepSeek use halted across Belgian federal workplaces

Belgium has ordered all federal government officials to cease using the Chinese AI tool DeepSeek, ef…

EU moves forward on new online child protection rules

The Council endorsed new EU rules that oblige online providers to prevent the spread of harmful mate…

SAP expands sovereign cloud vision with EU AI Cloud

EU AI Cloud integrates models from leading partners and provides multiple deployment paths that help…

EU faces new battles over digital rights

Campaigners pressed for stronger action on tech-enabled gender violence during the 16 Days initiativ…

New ChatGPT Voice design aims to smooth AI conversations

Users can now mix speaking and typing in one ChatGPT chat window, with real-time on-screen updates.

Scarcity gives Europe an edge in the AI race

Investors are increasingly viewing Europe’s constrained energy landscape as a pathway to more resili…

Webinar: Catalysing Local Innovation Ecosystems in Africa

As part of the Catalyst 2030, the African Business Technology Network (ATBN) and the AfriConEU will participate at a webinar featuring leaders of digital innovation hubs, ecosystem builders and digital skills and entrepreneur supporters, who will discuss topics such as the digital innovation landscape in Ghana, Nigeria, Uganda and Tanzania, and the opportunities and challenges present within these spaces. Catalyst, launched in 2020 in Davos at the World Economic Forum by a group of social entrepreneurs in collaboration with governments and community groups, strives to inspire change and innovation in an effort to achieve the SDGs by 2030. From May1-5, the group will host the ‘Catalysing Change Week’, under the theme Solutions from the Frontline.

Sustainable Batteries – The building blocks of a circular economy

ITU and the Secretariat of the Basel Convention will hold on the 26th of May 2023 a workshop on sustainable management of batteries. Batteries are essential for ICTs and the smart management of energy. By enhancing their durability and recyclability, preventing waste, and improving their design, it is possible to minimize energy usage, safeguard human and environmental health, and decrease global greenhouse gas emissions.

The workshop will discuss how managing batteries sustainably is crucial for a digital and circular economy, and how international standards can support this effort. Participants will learn about selecting batteries for ICT infrastructure and hear from experts on how sustainable batteries are essential for building a circular economy.

Internet Governance Forum (IGF) Serbia 2023

The IGF Serbia has a new edition. On May 16, the Serbian IGF will take place in Belgrade, the capital of Serbia. Topics for this discussion will touch upon the issues of cybersecurity, privacy and infrastructure related to Serbia and the balkan region in whole

WP.6 Second Forum

Artificial intelligence (AI) is used more frequently in our daily lives, including search engines and rewards programs. The development of AI chatbots, like Chat GPT, has raised questions about regulation and standardisation. The UNECE’s Working Party on Regulatory Cooperation and Standardization Policies (WP.6) Second Forum will be held virtually on May 22-26, 2023. WP.6 examines AI from two angles: gender-responsive standards and product conformity. AI can contain inherent gender biases, and products with integrated AI can evolve and become non-conformant over time. The annual meeting in May intends to address these and other regulatory challenges and find ways to ensure that AI products benefit all consumers equally. For more information, access the following link.