Transforming technology frameworks for the planet | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Online Moderator

The summaries present an engaging inquiry into the practices of large technology corporations, often referred to as big tech, particularly in the realm of digital transformation. The central argument revolves around big tech’s extractivist approach, extending beyond data to include water and natural resources. This is seen as significantly contributing to the ongoing climate and ecological crises. The conversation points out that the ‘green’ solutions proposed by these companies have been problematic due to their inherent extractive nature. This substantiates the negative sentiment woven throughout the discussion.

Another focus topic in the discussion is electronic waste, also termed as e-waste. This is increasingly produced as a byproduct of significant digital transformation and infrastructure expansions. The problem of responsibility for e-waste is underlined, highlighting the associated Sustainable Development Goals (SDGs) related to responsible consumption and sustainable urban environments. The Nodotao project in Argentina, which addresses e-waste, is cited as a supporting evidence. However, the query regarding who should be accountable for managing this still lingers.

Furthermore, the role of governments in instigating this situation is sternly questioned. They are criticised for funding traditional big tech models, thus displaying a lack of support for alternative technological business models. This criticism is particularly directed at local governments in Latin America, implying an inequality in resources distribution and hindrance of innovative potential in these regions.

In addition to the central debates, the summary also shines a light on the underpinning themes linked with the Sustainable Development Goals (SDGs). These include SDG 9: Industry, Innovation and Infrastructure; SDG 10: Reduced Inequalities; SDG 11: Sustainable Cities and Communities; SDG 12: Responsible Consumption and Production; SDG 13: Climate Action; and SDG 15: Life on Land.

On the whole, the discourse emphasises the urgency for responsible, sustainable practices in digital transformation, challenges the extractivist model of big tech, calls for governmental reinforcements for alternative business strategies, and advocates for accountability in e-waste management.

Becky Kazansky

The analysis spans a wide range of themes intersecting technology, sustainable production, and climate action. A dominant sentiment of concern emerges regarding the environmental impact of emerging technologies such as Artificial Intelligence (AI). Evidence suggests that every five enquiries made to AI chatbots result in half a litre of water being used, raising questions about resource consumption. Significant criticism is further directed towards carbon offsets, primarily due to evidence that over 90% of validated and standard-conforming offsets are ineffective and do not operate as anticipated.

Against this backdrop, the EU Green Claims Directive emerges as a positive development. This innovative policy aims to enhance transparency in sustainability claims, empowering consumers to discern the true environmental impact of products. This directive also dispels the notion that companies can achieve climate neutrality or sustainability through carbon offsets alone.

Further scrutiny in the realm of carbon markets and offset mechanisms is encouraged. The analysis suggests that even well-intended strategies may be inadequate, with bona fide carbon offsets often failing to function ecologically as initially planned. Civil society is urged to pursue a more comprehensive and fundamental critique of carbon offsets, highlighting the need for decisive climate action strategies.

Solar geoengineering, a speculative technology, warrants examination due to its potential to exacerbate rather than mitigate climate change. This technology, which necessitates broad-scale coordination, has solicited scepticism from scientists worldwide. Over 400 scientists question the practicality of governing such an expansive, potentially hazardous technology, advocating instead for a precautionary approach.

The analysis also voices strong support for just transitions – socio-economic and environmental strategies seeking equitable outcomes for society at large. A call for action is made to challenge potentially misleading climate solutions, a contentious issue the climate justice movement has been fervently addressing for decades.

The need for robust regulation of speculative, potentially harmful climate technologies is emphasised, amidst concerns over excessive investment by tech giants. The need for greater engagement and open dialogue surrounding these controversial climate technologies is also underscored, considering the propensity of large tech corporations to invest heavily in such technologies as part of their ongoing profit models.

In conclusion, the analysis highlights the profound links between climate action, sustainable production and innovative technology. It brings to light pressing issues over resource management and the veracity of ‘green’ strategies, underscores both regulatory and consumer measures to scrutinise and verify sustainability claims, and stresses the need for thorough critique, regulation and discussion around speculative technological responses to climate change.

Onsite Moderator

In her reflections, Kemly Camacho affirmed the paramount importance of incorporating human-scaled values, such as solidarity, friendship, happiness, and passion, into the spheres of globalisation and digitalisation. She emphasises the integration of these values into business models, accounting, project management, and team collectives as promising pathways to effectively tackle significant socio-economic and cultural matters. She also heralded non-profit business models as viable, sustainable solutions capable of addressing these challenges. The sentiment expressed towards this approach is categorically positive.

Further, Camacho ardently advocated for the formation of alternative business models as a potent solution to the ongoing climate crisis and the worsening contributions of the digital economy. She underscored the unsustainability of current models due to their heavy reliance on extractivism. Pointing to organic agriculture and social economy, she proposed these as positive examples of alternative models that prioritise sustainable business practices.

The Onsite Moderator voiced the belief that it is possible to foster a digital economy that respects and upholds planetary justice, environmental justice, care, and solidarity. Such principles are recognised as integral to realising SDG 8 (Decent Work and Economic Growth) and SDG 9 (Industry, Innovation and Infrastructure). Moreover, the intersectionality of environmental sustainability and the exercise of digital rights, both online and offline, was highlighted.

The pivotal role of governments as allies and champions of environmental justice was acknowledged. The Moderator posits that cooperation, standardisation, global norms, and internet governance in the digital realm can offer significant support to these governmental initiatives and facilitate a fair and just transition.

There was an emphatic call for governments to take bold steps in supporting alternative business models, particularly in light of the climate and ecological crisis. It was argued that governments should not only tackle the sustainability challenges associated with Big Tech’s business models but should also allocate funds to promote alternative business models. The limiting and problematic elements of Big Tech’s model, particularly its generation of e-waste and overdependence on data extractivism, were spotlighted as areas requiring significant overhaul and improvement.

Camacho stressed the need to pivot from digital transformation to digital appropriation. Traditional models, including start-ups, unicorns, and big techs, were identified as requiring a reevaluation as they priortise value addition and accumulation over redistribution and solidarity. She championed digital appropriation as a means to curtail consumption and develop essential digital tools.

Finally, the importance of considering different contextual factors in AI usage and data collection was underlined. Solutions need to be customised and tailored to respective communities, with the global community ensuring that those impacted are meaningfully included in discussions. The role of local communities was emphasised, and the voices of those affected were recognised as essential to the decision-making process.

In summary, the predominant sentiment advocates for a paradigm shift in business practices towards more sustainable, inclusive, and just models. This shift is expected to support several UN Sustainable Development Goals and pave the way for a sustainable digital economy and responsible AI usage.

Jaime Villareal

The May 1st Movement Technology Cooperative promotes social and environmental justice by providing an autonomous communications infrastructure. This infrastructure, which is collectively owned and democratically governed by the cooperative’s members, supports communication services, such as email, web hosting, and file sharing. This cooperative model promotes democratic leadership and communal ownership, contributing significantly to societal growth and development.

Contrasting with data-centric corporate internet services, the cooperative’s primary focus is not on data collection or data mining. Members consistently vote to maintain the infrastructure free from surveillance or exploitation, emphasising transparency and respect for privacy.

However, the cooperative faces challenges due to resource scarcity, limited capital, and lack of suitably-located server facilities. Constraints include insufficient funds for building personal data centres or gaining direct access to renewable energy resources, and finding cost-effective solutions for managing electronic waste is a challenge.

Despite these hurdles, the cooperative strives to increase environmental sustainability and reduce their carbon footprint. The cooperative’s operations are less environmentally damaging than corporate internet services owing to their avoidance of an extractive business model.

The cooperative strongly opposes corporate internet services’ surveillance and data collection practices, viewing them as coercive and exploitative. They critically analyse the capitalist narrative that advocates for high-yield businesses as the sole solution to climate change. They argue that the implementation of policies such as artificial intelligence (AI) — fuelled by data extraction and knowledge accumulation — have significant environmental and societal impacts.

Favouring collaborative working, the cooperative advocates for community or cooperative-based models for climate control and societal issues. They emphasise on fostering long-term sustainable development through engagement, communication, and cooperation, rather than domination and extraction.

The cooperative is critical of businesses that participate in ‘greenwashing’, making false claims of environmentally-friendly practices, while operating with extractive business models. Additionally, they reject the proposal of paying fines or taxes as atonement for corporate misconduct, comparing it to the flawed carbon credit system.

They express concern over large companies’ unauthorised use of user data for AI model training, deeming it exploitative. There’s also worry over users being unknowingly coerced into participating in AI training.

The cooperative opposes universal solutions for preserving local languages and indigenous cultures, insisting that proper consultation with local communities is vital. They stress the importance of recognising each community’s unique needs and interests. Overall, the cooperative is firmly dedicated to privacy, community engagement, and environmental sustainability, continuing to navigate through their challenges and make strides towards achieving their goals.

Florencia Roveri

Florencia Roveri champions the concept of a digital economy that incorporates elements of environmental justice, sustainability, e-waste management, and digital inclusion. This is exemplified by their organisation’s establishment in 1995 by a team of engineers, educators and social activists. The main motivation for their initiative was the growing need for effective and sustainable management of the increasing volumes of e-waste sourced from companies.

Their innovative and proactive step in transforming their e-waste management facility into a cooperative, initiated by seven founding members, was geared towards handling complex responsibilities such as production, commercialisation, and habilitation. This action demonstrated an awareness of the multifaceted challenges presented by e-waste and aimed at promoting social inclusion by incorporating more young individuals into the workforce. This aligns with the aims of SDG 8: ‘Decent Work and Economic Growth’.

Roveri emphasises the necessity for comprehensive e-waste management plans where responsibilities are shared amongst numerous actors. This includes government bodies playing a role in facilitating the disposal process, and companies generating e-waste ensuring its appropriate management. This reflects the importance of a united effort in achieving environmental sustainability, aligning with SDG 12 and 17—’Responsible Consumption and Production’ and ‘Partnerships for the Goals’, respectively.

Roveri also tackles a significant misconception about e-waste, underscoring it’s often misperceived as a ‘donation’ when, in reality, it’s a significant issue. They highlight the costs and risks associated with processing e-waste, demonstrating that it simply transfers the problem to other actors.

Moreover, Roveri proposes the idea of e-waste management being recognised as a public service due to its global impact and pervasive implications. They acknowledge the challenges in managing e-waste given its complex nature and the involvement of various stakeholders but also recognise the potential profits diligent e-waste management could yield.

Lastly, Roveri advocates for viewing e-waste management not solely as an environmental imperative but also as a potential source of job creation. They suggest it could serve as a solution to the ‘digital divide’, emphasising its societal and economic significance.

In conclusion, Roveri offers a comprehensive perspective that integrates the roles of diverse stakeholders to tackle the challenge of e-waste management effectively. This collective approach utilises e-waste management as a tool for job creation and a bridge to span the digital divide.

Kemly Camacho

Kemly has highlighted the urgent necessity to explore alternative business models, emphasising the gravity induced by societal factors as well as environmental crises. These new models are specifically designed to break socio-economic impasses and champion feminist entrepreneurship alongside businesses that regard care and solidarity as central principles. These values-driven business approaches have been identified as critical in addressing a complex interrelation of social, cultural, and economic issues.

A thorough critique of traditional models within the digital economy reveals their shortcomings in supporting entrepreneurs grappling with socio-economic problems. Notably, entrepreneurs frequently encountered obstacles in securing essential finance and technical support. This examination has been increased noting that business plans centred on fostering social and cultural awareness are rarely seen as viable under existing digital economy frameworks.

Kemly has further marked the current global environmental crisis as a patent symbol of urgency, necessitating a comprehensive reform of established business models. Predominant models, underpinned by extractivism, are now perceived as unsustainable, urgently demanding innovation.

Urgent changes in prevailing digital transformation narratives among governments, academia, and start-up ecosystems in Latin America were proposed. Currently, the dominating ideologies incline strongly towards consumption-based models. The recommendation for academia, incubators, and governments is a drastic revision of business methods and an uptake of digital appropriation models, which significantly contrast the current focus on consumption in digital transformation initiatives.

The dominant models within the digital economy and traditional business, owing to their extractive tendencies, have been subjected to rigorous critique, especially given the emergence of new values such as solidarity and care. This critique strongly advocates that platform companies should pivot their business models from value extraction and instead, concentrate on fostering and accelerating solidarity and care.

The digital appropriation strategy could present a valuable remedy, especially pertinent in the post-pandemic era. It accentuates the need to identify useful digital tools, aiming to reduce wasteful resource use. Furthermore, technology frameworks should echo this sentiment, focusing on solving tangible, real-life problems faced by women, including childcare and community care.

The concept of fair employment is emphasised as central to business models like cooperatives, and its vital contribution to the survival of humanity is unequivocally stated. Nonetheless, concerns have been raised about the growing acceptance of precarious work and the practice of charging for machine-learning training. These are seen as threats to the principles of human survival and equitable access to digital resources, respectively, thus underlining the necessity to integrate socio-economic and environmental sustainability and care-oriented values within current business models.

Audience

The comprehensive discourse highlights the divisive perceptions concerning the current practices of AI companies, particularly regarding their data usage and training techniques. Notably, there is a prevailing negative sentiment surrounding AI companies exploiting data without providing compensation or respecting copyright laws, a standpoint seen as discourteous, prompting suggestions to revisit these practices and potentially, levy relevant taxation. This concept is based on the understanding that the sophistication of AI relies heavily on the consumption of substantial data volumes, however, in the existing scenario, there is no remuneration structure for the people who generate or own the data.

On a positive note, there is substantial advocacy for delving into the economics of artificially intelligent platforms, reflecting the sentiment that there is a necessity to make AI smarter. Although this argument does not deliver direct supporting facts, it implies an expectation for a more robust and intelligently engineered AI system that is propelled by an integrated understanding of economics and data science.

Further positivity emanates from the discussion on innovation, particularly with the focus on alternative technology frameworks. Dialogues on this topic have spotlighted cooperative models as potential solutions. This argument suggests that the evolution of technology frameworks, specifically those with elements of social, ecological, and feminist policies, could be the key to surmounting prevailing challenges.

Simultaneously, the impact of AI on the industry landscape of Japan is notably significant. The transformative change ascribed to AI is predicted to disrupt the existing ‘pyramid’ structure prevalent in the industry. Insights indicate that the smaller ‘worker’ roles, traditionally executed by humans, are being replaced by AI, signalling a shift in the dynamics of the digital industry.

Indeed, this transition also emphasises new opportunities for work styles and business models. Within this ever-changing landscape, it’s suggested that AI training could emerge as a novel style of work, particularly for those proficient in Japanese, pointing to an evolving job market.

Conclusively, the analysis identifies a distinct disparity between current and AI-introduced business models. It suggests a shift in the layered fabric of the Japanese industry, indicating a dichotomy between a rich industrial history and the transformation instigated by AI-driven models.

Overall, the analysis presents a holistic image of the ongoing structural, operational, and ethical debates surrounding artificial intelligence. The future path seems to advocate diversity, questioning antiquated practises, and forging ahead with more cooperative, equitable, and mutually beneficial approaches for humans and AI.

Yilmaz Akkoyun

The BMZ, Germany’s Federal Ministry for Economic Cooperation and Development, is actively striving to enhance societal, political, and economic participation among individuals in its partner countries. A particular emphasis is placed on the most marginalised sections, demonstrating the ministry’s commitment to establishing a comprehensive, holistic approach to address the root causes of multifaceted issues.

Despite these efforts, a considerable digital divide exists globally. Nearly half of the world’s population lacks internet access, with internet usage dropping to fewer than 40% in partner countries. Worryingly, women and marginalised communities bear the brunt of this divide, highlighting significant and widespread inequality in digitalisation.

To counteract this issue, the BMZ has backed a fair, secure, open, and free internet under the banner of the Global Digital Compact. This step is considered a crucial driver in achieving the Sustainable Development Goals (SDGs). Actively engaging in the associated dialogue and processes, the ministry is intent on promoting an inclusive digital transformation that is environmentally friendly, socially conscious, and feminist.

A human-centred perspective is core to digital transformation. Germany, in collaboration with the European Union, is shaping digitalisation to address potential environmental, human rights, and societal risks. The country’s digital policy is underpinned by three core elements: establishing standards and norms, Digital Public Infrastructure (DPI) and developing digital skills within society and the economy.

Importantly, digitalisation is being employed as a tool to actively combat environmental challenges. Germany partners with countries around the globe to advocate fair regulation of the digital economy. This is exemplified by their collaboration with Smart Africa in developing national Artificial Intelligence strategies focused on environmental challenges.

Education is pivotal for the successful enactment of digital transformation. Germany’s commitment to encouraging digital skills is demonstrated through platforms such as Attingi, which has engaged over 11 million individuals, most notably advancing young women’s comprehension of digitalisation.

Simultaneously, Germany expresses concerns over the misuse of data and the risk of exacerbating social divisions. Therefore, they are committed to ensuring their digital policy promotes a safe, inclusive internet and fair data markets in partner countries to circumvent these issues.

The sustainability of waste donation is questioned, with an expressed need for increased education in waste management. In terms of equity in digital transitions, the BEAMSET digital initiative supports fair digital transitions in partner countries. The initiative Fair Forward contributes to this goal, working to develop open-source AI models to stimulate local innovation.

The importance of economic aspects within these engagements is recognised, yet a global discussion on the topic is deemed necessary. In terms of international partnership, BMZ contributes significantly to global politics, maintaining robust relationships with a broad international network of governments and other stakeholders, especially civil society actors. This underlines the urgency of integrating local and national perspectives from the Global South into the international discourse.

In conclusion, according to the BMZ, global digital cooperation is essential for supporting a holistic approach to digital transformation. The guiding focus is on fostering international partnerships to drive digital transition that is both socially and environmentally sustainable.

Session transcript

Kemly Camacho:
models and feminist economy proposals. Not only for our own business, but also, as I said before, to create incubators for entrepreneurship, especially for women in IT, to develop other kind of business models. We have learned how to integrate solidarity, friendship, happiness, passion in business models. And we have create business models where these words and where these beliefs are part of the business model, part of the accounting, part of the project management, part of the team collective, yes? And we have learned how to develop non-profit business as a strategy to respond to social, economic, and cultural problems. For the social economy model inside the digital economy, the business part is an answer for the social, cultural, and economical needs. It’s not the main issue, the business part, yes? That comes from the social economy perspective, yes? Non-profit business models where the business is the answer to these social problems. Inside the IT society who have a specific problem related with the digital. And we also have learned how to put care in the center of our business model, and that comes from the feminist economy. Care in the center of business models. models, then solidarity in the centre of business model, care in the centre of business model, non-profit models, yes, in the digital, for a digital, non-destructive digital society. There are important challenges. We have launched last year, in alliance with the National Centre for Training in Co-ops and in alliance with the University of Mondragon, and we have incubated, since 10 years already, digital feminist initiatives based in a model that we have created, who began with a feminist hackathon, which is very different than a normal hackathon, yes, it’s a hackathon not for competition, but for sharing, yes, and everything, I cannot go in details, but we have created this incubator of digital feminist initiatives, but of course we confront in our context difficulties in the innovation ecosystem to try to support these initiatives, platform co-ops or feminist entrepreneurship for the IT and for the digital society, because they not fit in what they understand is an innovation, or in what they understand is a business model for the digital economy. Then the access to finance, to support, to technical support, is very hard and difficult. We have to create them ourselves also. Then this is a main challenge. We really believe there is a need to create alternatives and demonstrate there are other ways to develop the digital economy, and we feel it’s urgent. The planet is burning, and the answer is not in the business model that we have created until now. We have to create other business model and demonstrate that it’s possible to develop a digital economy not based in extractivism. There are movements in advance of ourselves where we can look at examples, and always I put the organic agriculture as an example of how we can develop these other models. And also, of course, the social economy. They inspire our experience, and we hope we inspire you to propose alternatives. Thank you.

Onsite Moderator:
Thank you so much, Camly, particularly for reminding us that it is possible to have a digital economy that contributes to planetary justice, environmental justice, not dissociated from care and solidarity. So I think that’s very important, and that’s the change of paradigm. So with that, I would like to invite one of our remote speakers, Florencia Roveri from UNODO TAU, who is joining us from Argentina. So, Florencia, can you hear us? Can you confirm if you can hear us? Yes, hello. Good night. We can hear you perfectly, welcome. So nice to see you, Florencia, welcome to the panel. And I would like you to please, Florencia, tell us what motivated UNODO TAU to transform your e-waste management facility into a cooperative, and what has been the impact so far? Also, we are very curious to hear about the obstacles that you have faced in that transitioning process. So you have the floor, and welcome again, Florencia. Thank you, Valeria, and thank you for the invitation to share our experience in this. Can you hear me? Yes, Florencia. We can hear you. Go ahead, please. Okay.

Florencia Roveri:
For sharing our experience in this instance, in this pre-event, it is very valuable for us to participate and add our view. We are a social organization in Argentina, and as Valeria mentioned, we are a social organization that works for the digital inclusion of organizations created by a group of engineers, educators, and social activists. We move from this objective of working for the access to technology to deal with the access of technology. In that sense, we developed a plan for the treatment of e-waste. We began in – sorry, I am going to start again, sorry – that was a social organization created in 1995, and we developed our work in that sense. The plans we create, it has to do with the access of machines we start to receive from companies, machines that we delivered for the organization of a network of telecenters. The plan was created in 2019, and it was in the frame of a local program of work inclusion. And it was formed by six young men and one woman, accompanied by three members of Nodoktau. The plant receives mainly e-waste from companies and public bodies that must, sorry, sorry, sorry, Valeria, I am dealing with my nerves and the distance, I have to reorganize, sorry. It’s okay, it’s okay Florencia, if you prefer we can come back to you in a bit, is that something that you could like us to do, then we can invite him and then we can go back to you? Is it okay? It’s okay, it’s okay. Okay, but if you prefer to continue, if you prefer to continue now, it’s also okay, just let me know what your preference could be. We create the plant, in that frame I was mentioning, and four years after the creation of the plant we start leading with some aspects related to the focus of Nodoktau in the work of digital inclusion but also in the sustainability of the plant. So we need to face that challenge of articulating the dynamic of the organization and of the plant management. Our focus was a more general work and the cooperative was growing and, sorry, go on with Jaime, sorry for that, sorry for that.

Onsite Moderator:
Thank you so much, Florencia, for sharing your experience, and then you can add anything when we are in part of the conversation, feel free to jump in and add anything that you might want to share with the audience. Now let’s turn to Mexico and to invite reflections from Jaime Villarreal from the May 1st Movement Technology Cooperative. Jaime, it could be very useful for us to hear your perspective on why it is important for May 1st Movement Technology to be a democratically run, not-for-profit cooperative. Please do share your experience with us. You have the mic.

Jaime Villareal:
Thank you, Valeria. So when May 1st members join our cooperative, they’re choosing to join an organization that supports building movements for social and environmental justice, right? And our specific focus within that is the role of technology in both local and international movement struggles. And so in addition to the movement outreach and engagement that we do, one of May 1st’s central projects is maintaining our own autonomous communications infrastructure. And what that means is we run our own servers, our own internet servers. We provide email, and web hosting, and file sharing, and other communication services like video conferencing for our members. And as part of our cooperative, that means our members of the cooperative collectively own and we democratically govern together this infrastructure. And what that gives our members the power to do is that year after year, they consistently vote to maintain this project, to keep our own infrastructure, and to keep that infrastructure free of any kind of surveillance or exploitation. And this is really, really important. A lot of times people ask us, so are you creating an alternative to the corporate internet services? And, I like to say that, no, we’re not an alternative to Google or to Meta or Amazon or those because we are focusing on the needs of our members. We are providing the tools that facilitate communication, that allow them to organize and take action to create a better world on their own terms. And contrary to popular belief, this is not something that corporate internet monopolies are in the business of doing. They are not facilitating communication. Their core business is data collection and data mining. Any communication services they provide are just hooks, are carefully engineered to coerce consumers into giving up their privacy. These business models are fundamentally extractive and exploitive. So because these companies collect and store petabytes of data, personal data from citizens and from consumers, the necessary computing resources and the environmental impact of running their operations is astronomically larger than our own. So, in terms of environmental sustainability, we are already at an advantage simply because we do the right thing and we don’t engage in this kind of surveillance and data collection. But, aside from that obvious benefit of being free of surveillance, our members are still interested in us finding new ways to increase our environmental sustainability and to reduce our carbon footprint. Unfortunately, for an organization of our size, our options are limited. Where we can place our servers is limited by both human resources and by access to high-speed broadband. We, as a small organization, simply don’t have the capital to build our own data centers that would be closer or have direct access to renewable energy resources. And also, finding cost-effective solutions to processing our own e-waste is also a challenge. And so, that’s something we’re interested in learning from other APC members about, like Nolotau. So, this is the advantage, I think, that comes from allowing our members to guide our own project in being a cooperative model that gives our members a voice and a vote and control and ownership over their own communications.

Onsite Moderator:
That’s a very powerful experience to share. Thank you so much, Jaime. And also, because it illustrates the interplay between environmental sustainability and the reinforcement of the exercise of rights online and offline. So, that’s quite interesting and inspiring. So, governments, obviously, governments could be key allies and champions for environmental justice. And in that sense, we are very happy to have Gilmas here in the panel with us to share the perspective of the German public strategy and approach on this field. So, Gilmas, what does cooperation mean in the context of digitalization from the perspective of BMZ? So, welcome. And also, if you can also, perhaps, let me just add something. So, if you can also touch upon how can global norms and standards relating to internet governance and environmental governance can support these cooperative models and these approaches, and in that sense, work all together towards a fair transition, a just transition. So, welcome. and let us hear your views.

Yilmaz Akkoyun:
Dear Valeria, thank you so much for this interesting question. It’s a great honor and pleasure to share my views on behalf of the BMZ on day zero here at the IGF and to learn more about the work of cooperatives around the globe and take it back to Berlin to also check how our work is aligned with what cooperatives do. Let me start with the first question and then tackle the second one too. So cooperation is at the heart of what BMZ does. So what are the plans for the future what projects are CVID-19 efforts and what’s your vision of what the future of our CVID-19 efforts should look like? Yes, I’ll start. So my name is Philip. It’s a great pleasure to meet you. We very kindly prepared and introduced our whole team to the question of our Media Society, what is your wish for the future of CVID-19. economic cooperation and development, as German BMZ, we want to enhance economic, political and societal participation of all our people in our countries, partner countries, especially the most marginalized. This is our mission and cooperation with our partners is essential for the holistic approach, necessary to address the root causes of the complex problems that we are facing today. These global challenges did not become easier if we consider climate change, which you just mentioned, pandemics, poverty and the fight against hunger. They all require coordinated responses that go beyond individual projects and now let me get to what cooperation means, especially in the field of digital affairs, digitalization. If we look how the world looks today, it’s happening very unequally. Almost half of the world’s population do not have access to the Internet. We are here in Japan and the access, of course, is very different if we consider partners in the global South. There, women and marginalized communities are particularly affected by this digital divide, whereas more than 90% of people are online in the European Union or Germany, where I am from. In our partner countries, fewer than 40% do have Internet access and we are working to change this. So cooperation is key for us in addressing these issues and I think we need conversations between countries from the global South and the North to make digitalization benefits all. Staying true to the claim to leave no one behind, we first need to make sure that everyone can benefit from digital transformation. What does it mean? This means prioritizing inclusivity and promoting meaningful equal access to for all people, especially in these vulnerable and marginalized communities, are essential and yes, global norms are essential in doing this We have three cornerstones in our digital policy work to get there and norms are one of them We could talk about this in our own panel, I would say but for us, they are essential, especially digital public goods and this is very important for us in our work and yesterday you had, I think, one conversation of the role of the Global Digital Compact and we are very engaged in that process, in the dialogue, contributing and I think we have an interesting road ahead to the summit of the future in getting there and shaping this together and learning from you is helpful for us to contribute in that process and engaging in these norms and we aim to promote a fair, free, open and secure internet this is also for me part of the norms you mentioned to get a digital transformation which is ecological, social and feminist and in this way, the digital transformation can be a driver of progress towards achieving the SDGs where we are now on our half-time, if we consider the agenda 2030 I hope this answered your question for now and I’m looking forward to the dialogue here and thank you for inviting me again

Onsite Moderator:
Thank you, Yilmaz As we heard from Ken Lee, if we want the digital economy to really contribute to planetary justice then the consideration of feminist and gender perspective is crucial I don’t think we can get that without considering that aspect. And last but not least, and before inviting Paz to also pose some questions for you, I would like to invite Becky to share your perspective. Becky, as a researcher in the field, how do you see this conversation of transforming technology frameworks and advancing planetary justice in the governance of technology in relation to recent policy developments such as labeling? So, very curious to hear about it.

Becky Kazansky:
Hello. Thank you so much for having me this morning and for being part of this discussion. So, my name is Becky Kuczynski. I’m a researcher at the University of Amsterdam where I study the just governance of climate technologies. Thank you very much. Okay, I think it’s on now. And for the last year or so, I’ve been collaborating with a number of different organizations and networks and APC included, to think about the values and principles that can guide more collaborations across different civil society movements. To think about technological governance that can support environmental and climate justice. And as part of that work, we’ve been brainstorming on what a theory of change can look like on this really, in collaboration with a number of different partners and collaborators. So, I wanted to share a little bit about some of the biggest themes that have come up from this process because I think it really brings home how important the kinds of models are that the other speakers have really beautifully illustrated already. And I would say that one of the most important themes that has come up is, you know, if on the one side it’s essential to support alternative models for technology through collaboratives, co-ops, and other models. On the other hand, it’s essential that we, and I mean we in this as a group of, you know, different kinds of stakeholders coming together today. That we don’t get distracted by technologies and tools that on the surface can seem quite promising for mitigating or adapting to climate change, but which have already proven to be quite harmful to different kinds of communities and populations and countries around the world. And so in not getting distracted, this would provide more room for support for the kinds of models that are being discussed today. And I’ll give you a few examples of the kinds of distractions that have come up in the policy space recently and around which there is movement on the policy side to either reform or think strongly about restrictive governance and even going further than that. The first is around AI, which is a subject that comes up a lot these days. And I’m thinking specifically of the fact that in the climate governance space, the UNFCCC has recently announced a new initiative to support an exploration of the promise of AI as a climate technology. And on this front, what civil society really is arguing for is that if there is gonna be a wide-scale investment, and there already is, into data-driven technologies like AI, then we have to make sure that their promises on the one hand aren’t oversold by certain actors, and on the other hand, that the harms that already are apparent are actually taken account of in further movements around AI. And I’ll give you one example in particular, because AI is now being framed as an important tool in the food and water security nexus. Recently, there have been studies coming out that, and it has been historically very difficult to measure exactly what the impact of AI has been on climate, water, and resources. But there have been studies showing that, for example, half a liter of water is spent for every five queries that someone makes to chat GPT. So to put it into context, that’s a large amount of water for someone sitting in front of their computer and asking five questions to something that is powered by a very, very water- and resource-intensive infrastructure. So, and that is just one example from the climate side, but there are so many other human rights-centered harms that have been raised for decades now by civil society. And this really brings into question then, and I think offers a lot for consideration and food for thought. as AI gets invested into as a climate technology. And a very quick second example would be around solar geoengineering. This is a long promise, but also very still quite speculative technology, which in the last year governments have actually announced an interest in investing and experimenting with on a scale that hasn’t necessarily taken place before. This is a technology for which there is currently no system of governance. And in fact, 400 scientists from around the world question whether it is even possible to govern something like solar geoengineering because it requires enormous wide-scale coordination across the world. And once it is put into place, it creates a lock-in because if it were to stop, it could make climate change and global heating much worse and much faster. So on this basis, a number of different groups are pushing for moratoriums, bans, and basically trying to invoke the precautionary principle to create the space to step back and ask whether this is something that actually we, citizens and people living across different regions of the world should consent to because once it’s put in place, the impacts can be enormous. And one final example from the positive side of recent policy developments is around the EU Green Claims Directive, which is an innovative piece of policy which would help the consumer understand. which products that they are interested in purchasing are actually living up to the many claims around sustainability that different companies make in terms of net zero and otherwise. This is great progress. However, again, it’s clear that there’s a lot to figure out in how this works in practice. And I’ll give you one example, which is around carbon offsets. So carbon offsets are not directly addressed, as I understand it, by the Green Claims Directive. However, it does take the perspective that a company should not be able to claim it’s climate neutral or sustainable simply because it makes use of carbon offsets. This is an important acknowledgement, and I think it responds to a number of different scientific studies, consensus around that that is building, and also pushback from civil society for several decades now, saying that carbon offsets do not actually work ecologically the way that they are set out to, and that they provide cover for companies to make claims that they can’t necessarily deliver on. But what’s really important to ask there is how far can this Green Claims Directive go? Because a lot of civil society is pushing for a more fundamental critique of carbon offsets. They would say that it’s not enough to simply say, well, you can only use a certain amount of carbon offsets. They would say, actually, the entire carbon market system needs either reform or even something more drastic than that, because at the moment, over 90% of carbon offsets, even the verified ones. even the ones that conform to all the standards that have been set, are not working. And so, I’m gonna leave it there and simply say that there are a lot of questions here, obviously. There are positive governance and policy developments in this regard, and the hope is that by pushing these further and not getting distracted by risky speculative technologies, that more support is available for the kinds of initiatives that we’ve heard from today. Thank you.

Onsite Moderator:
Thank you so much, Vicky. That’s also very powerful, and particularly, Nadia, how you remind us about the need to apply the precautionary principle. I think that’s a must. Hopefully, it has been taken seriously by all the necessary stakeholders. With that, I want to invite my co-moderator, Paz Peña. First, to check also if you have questions for the speakers, and if we have also interventions from remote participants. So, Paz, over to you.

Online Moderator:
Thank you, Valeria. Just to remind our online participants that they can actually share their comments and questions on the chat or on the Q&A tool. I just want to make a couple of general questions, open questions to our participants. I think, based on what you have said, the big question here is, what is digital transformation in the context of climate and the ecological crisis, no? So, as Becky said, in a way, you have two answers, no? One is the green responses that big tech is giving. which, by the way, are super problematic because of the extractivism nature of big techs, not only in terms of data extractivism and all the infrastructure that you need for that extractivism, but also because of water extractivism, natural resources in general, et cetera. So that is one thing that is super important to address, but in a way, I believe that governments especially are forgetting to actually look other business models besides the big tech model in their own local companies of technologies, no? So it seems that, and this is something that I’ve been learning in all countries, for example, in Latin America, all governments try to give funds to companies to replicate the business model of big tech, no? In a way, more data, more growth, et cetera. And that is why I think it’s important to ask ourselves what is digital transformation then in the context of this climate crisis? We want more data. We want more growth of that infrastructure because that means, for example, that we need to deal with e-waste. And this is what the incredible initiative that Nodotao is doing in Argentina, but who is paying for that? Is big tech paying to organizations to cope with e-waste in local countries as Argentina? Who’s paying for that? What is doing our, what are governments doing with all that kind of e-waste that we need to deal with when we are saying digital transformation is? more big tech, et cetera, et cetera, no? So I think my next question, besides what is digital transformation in this context, is then what is the role of governments? Not only to deal with the problem of sustainability of big techs, but also in terms of basically fund fund the alternative business models, no? Because here in Latin America, we have a very historical tradition of different technological business models that sometimes they fail because they don’t see support, no, from local governments. So what is the role of local governments in there, in the context of climate crisis and ecological crisis? I think those questions are key in order to actually transform, radically transform the planetary ways to, the ways that we see the planetary crisis from technology. Thank you very much.

Onsite Moderator:
Yes, thank you. Thank you, Paz, and also in addition to what role governments have in supporting these alternative models, I would like to also add how that support should look like concretely in practice. So both our remote participants and our speakers, remote speakers and speakers on site are invited to respond and to react to these questions that Paz has also brought up. So if any of you could like to respond, Kimberly, please go ahead. Remote participants, okay.

Kemly Camacho:
Okay, it resonates me a lot what our colleague said. here about not putting the emphasis in the green, but in the models that we are using. And that connects a lot with what Paz said before. Because at least for Latin America, you know in the imaginary of our governments, but also of our citizens, and also of our academia, the model, the big model, yes, is startups, unicorn, big techs, yes? This business model, this way to do economy is the ideal way, is the road, the path where we have to go. And I think it’s crucial, yes, to really rethink these models, yes? And when I say rethink these models, is really change tools, change approaches, change methodologies to develop business models and digital economy, the same thing, yes? We have to change that. We, since after the pandemic, or during the pandemic, that we all in Latin America talk of the solution was the digital transformation, we always said, this is not the solution. The solution is digital appropriation, which is totally different for ourselves. Digital transformation is oriented to consumption. Digital appropriation is oriented to reduce consumption, to think which digital tools you really need, which digital business you really need to develop. For us, there is a main, main difference between digital transformation and digital appropriation. And we and go and advocate for digital transformation. I have to say, when I talk about changing the business models, really develop tools, and I’m calling academia, I’m calling incubators, I’m calling governments to really rethink the way that we are doing business. And I’m going to put concrete examples, I don’t know who of you have worked with the canvas model to develop a business, yes? And in this canvas model to develop a business, the center of the canvas model is the value added of your business, yes? What we have done is put in the center the solidarity and care that your business is going to improve and develop, yes? Before the value added to get money, yes? And also we put, instead of putting in the center of this canvas model the accumulation, we put the redistribution of the resources that you are going to make if you develop this business model. Then we have to change that, yes? Because for us, this is in the center of the development of our society, and we cannot talk green if we are using unicorn startups and big companies as, and platform companies, not platform co-op, but platform companies as the model of the digital economy and as the model of our entrepreneurship in our countries. Then answering a little bit the question, this is my reaction. And also remember, remember to all of you, extractivism is everywhere. because we talk about extractivism for the natural resources, water, all of that and it’s crucial, fundamental but also it’s extractivism of wisdom, extractivism of knowledge of the people extractivism of solidarity, extractivism of the time extractivism is the center of this model then this is my reaction, Valeria.

Onsite Moderator:
let me invite Florencia to give a space also to the virtual participation honoring the hybrid format of the IGF and then we can take reactions from the floor here so Florencia, please, the floor is yours.

Florencia Roveri:
thank you, Valeria just to follow what you are saying when we decided to become the plant into a cooperative it has to do to follow with the intention of providing work inclusion for a group of young people adding that sense to our previous work of digital inclusion and also assuming the challenge that we were facing with the excess of e-waste in our everyday work so we first developed the plant and then following the process of the plant we decided to go on with the project of the cooperative due to the different focus of our work and the complexity of the work of the cooperative, of the plant the plant has to lead with aspects related to production to commercialization and to habilitations and it has a complexity that it aims to be a production unit on itself We also have in Noto Tau an experience of accompanying another cooperative that is working in the treatment of cartridge tons, and they are also following this process of becoming a cooperative, in this case formed by women involved in a work of a gender organization working with issues of violence and situations in which they are involved, in which also the primary aspect is their work inclusion. So in these two experiences, we work with the treatment of technology, assuming the responsibility of dealing with that aspect of the technology, but also with a human aspect related to the work and social inclusion of these groups. In the case of the plant, we also include another aspect that is the social destiny of the equipments that we could recover and repair in the work of the plant. So these experiences invite us to rethink about the use of technology and our work with it, assuming that they are still resources needed for the communities, but the environmental impact in place is needed to be assumed by a diversity of actors. And one of the aspects we found in the work of the cooperatives, in particular the e-waste cooperative management of Tau, is that these responsibilities are not being perceived and are not being assumed. And in this sense, we distinguish aspects related to government responsibilities in the terms of developing plans for integral management of e-waste and the coordination of actors and the regulation and promotion of laws. In Argentina, we do not have a national law, we have a provincial regulation with some aspects that are interesting in terms of, for example, recognising the figure of the manager of e-waste and recognising the social reuse of equipment that is interesting in several aspects that promotes the reuse of computers, for example, and also the responsibility of companies and the private sector in which we can distinguish the responsibility of producers facilitating the disassembly process, aspects with which we deal in the work of the plant, of the management plant, and also the responsibility of companies that generate e-waste in terms as Paz was mentioning previously. In this sense, it is important to visibilise the cost involved in the treatment. We lead locally with a lot of actors that want to value the work of the cooperative of Nonotau, but they assume that the devices they discard are donations. In this sense, it is important to highlight an extended perception of this concept of the devices I do not use anymore or the e-waste the companies generate are donated for social use. There is a slogan in the local campaign that is, don’t donate your waste to me, because in this sense, we are… naming donation, the process that is getting rid of a problem and give it that problem to another actor. So, the idea is relevant dealing with any staff that don’t donate waste, but in the case of technology or discarded technology, there is some difficulties that make it even dangerous. So, that is what we want to mention.

Onsite Moderator:
Thank you so much, Florencia. Yilmaz or Becky, would you like to take some of the questions that Paz brought up? Please go ahead.

Yilmaz Akkoyun:
Yes, thank you so much. Let me first start with an invitation for another session of the German delegation. On Tuesday, on day two, we have an event, Planetary Limits of AI, Governance for Just Digitalization. This is exactly our topic and guests are welcome and I will give a short overview what we contribute in this field and how our approach is towards digital transformation in particular. As being said, together with the European Union, we offer a human-centered digital transformation. For us, this means that we actively shape digitalization by addressing its risks for environment, but also individual human rights and society. We use the term trend transition, how digitalization can help the fight against environmental challenges. We want to actively combat social division, the misuse of data, as well as environmental and climate damage caused by resource consumption and CO2 emissions. Exactly what you mentioned and I really liked your approach towards changing a business model canvas use because I was taking MBA classes in the US and this was exactly what you mentioned and I think education is essential for this but I will come back to it later again. So as Germany we are committed towards ecological, feminist and social digital policy and for us this enables a fair balance of interests based on European standards and universal human rights. We want to ensure that partner countries are integrated into an open, secure and inclusive internet and fair data markets and for that we also need strong local governments and I will also come back to that later. Our digital policy is based on three cornerstones and you mentioned earlier the role of standards and norms. These are essential but also structures. By that I mean DPI, digital public infrastructure and goods but third also promoting digital skills in society and in the economy. I also really like the sentence about don’t donate your waste to me because I think it’s also a question of education for that. But firstly providing structures for human-centered digital services and public goods is vital. Many of our initiatives, to give you a more concrete example because otherwise sometimes international digital policy is very abstract, our initiatives contribute to more democratic and open, fair societies. Our goal is to support the digital self-determination of citizen and partner countries and this requires effective, secure infrastructure that should be based on open and reusable ICD building blocks. We have one initiative, it’s our flagship initiative of the German International Digital Policy, it’s called Gafstag. We develop a global toolbox for reusable open source building blocks for Gafstag. And secondly, to be more concrete on the questions mentioned, we work with partner countries across the globe to promote fair regulation of the digital economy. And one initiative that I want to highlight is the so-called BMZ DTCs, the Digital Transformation Centers. They serve as a local implementation and anchor structure of these efforts. Around the globe, we already have 22, and they are our gate to the local world, to the local communities, the local governments. And further through the initiative Fair Forward, we have worked with governments of, for example, Rwanda, Ghana, and India. And together with Smart Africa, we are involved in developing national AI strategies. And these AI national strategies have a particular focus on the fight against environmental challenges. I think we support also our global partners to realize the potential of AI through local innovation. And here’s where the magic, so to speak, happens. And last but not least, what you mentioned, digital skills. They are one of the cornerstones of BMZ digital policy training of young people in job-related digital skills, but also related to waste management and getting a mindset and a culture in this regard is essential. Therefore, we support the public sector in our daily work, the private sector, civil society, and especially young women. in acquiring the necessary knowledge about digitalization and thus being able to respond to the challenges of digital transformation and to use its potential. And our learning platform is called Attingi which we wrote out via German Development Corporation and our partners and it already reached over 11 million people and I just wanted to stress that next to norms we focus on standards and skills and with these three cornerstones we try to contribute towards working on a digital transformation which really contributes in the fight against environmental damages and challenges and this is our concrete work next to the coordination efforts in the global arena and fora. Thank you so much.

Onsite Moderator:
I think that commitment is very important in order to detect the impact to know that the policy developments and norms will have one in place as Becky was pointing out. It might be difficult then to revert the effects of what policies and norms enable or the result that they produce. So thank you for that. Jaime, let me know if you want to intervene at this point to refer to the questions that Paz brought up otherwise I can open the floor for questions and comments here and also from remote participants. So Jaime, first let me hear from you. You could like to intervene.

Jaime Villareal:
Sure, gracias Valeria. I don’t know if I have anything new to add. I agree with a lot of what has been said so far. I just really want to highlight some of the points that some of our other guests have made. I really agree with Kemblee that we have to push back on this dominant model of thinking about how solutions are made. This dominant capitalist narrative that only excessive… excessively profitable high-yield businesses can guide us through climate change. This kind of ridiculous thinking is what is driving the investment and the promotion of things like artificial intelligence. And we have to remember that artificial intelligence doesn’t exist on its own. It is fueled by our data, our information. It is this rampant accumulation and extraction of our knowledge that makes this possible. And that is built on, that has a very huge physical environmental impact. And it has a real emotional impact, psychological impact on us as a society to operate that way. This kind of thinking, this nonsense is essentially trying to put out a fire with gasoline. We can’t allow this to continue. And I really appreciate these comments about pushing back against the greenwashing of these businesses. These companies who make enormous profits based on these extractive business models of surveilling their users, and then spend just a tiny fraction of that money to create a few exemplary sustainable data center projects. We can’t applaud these things. These public relations stunts, they don’t address the total environmental impact of all of the computing resources that are needed to continue their operations. But also they are essentially, even if they do comply with their promises for reducing their carbon footprints, but they’re essentially doing damage control to a problem that they themselves have created. And we can’t allow this kind of thing to continue. I really agree with supporting a different thinking, different models, supporting community and cooperative based models towards supporting communications and listening to ourselves and taking guidance from our communities in these matters. And in that sense, we’ve been doing, a lot of us have been doing this for a long time. And if we are to be supported, I think it needs to be on our own terms. We need to be trusted to continue. practicing community engagement in the way that we have been doing and organizing ourselves in the way that we have been doing.

Onsite Moderator:
Precisely, that’s part of the, perhaps one of the most important responses to enable that that community engagement and response is possible and feasible. So let me open for reactions, comments, questions from the floor and also from remote participants. If there is anyone here that would like to intervene and to pose a question or a reaction, your interventions are welcome. So let me know what you, you can raise your hand, we can pass you the mic. Please go ahead.

Audience:
Thank you very much Valeria. I think, is the microphone working? Yes, okay, very good. I very much appreciate the session. My name is Peter Brücke. I’m the chairman of the World Summit Awards. And what we do is we focus on best practice examples exactly regarding new and different business models. In this room, our next session is on hacking digital divides. And you will see Alloy, for instance, which is a micro financing solution for small and micro businesses. You will see Social Lab from Chile, which shows actually, I think you will love this very much, a business model based on love. And they have 600 different kind of companies there. Then we have people from Lebanon which show how social volunteers work. And it’s something very interesting because the digital transformation centers work with us regarding promoting these examples. We were just in Mexico and Puebla and doing this. One of the key things, however, is for the session, which I think is really so important, is to talk about the technology frameworks. and the alternative of technology frameworks. And I think what Jaime was saying is really very much to the point, but he is giving, Jaime, you are giving us a critique, but you’re not showing us what would be possible solutions. So, for instance, one of the things which I think is very much important when you’re looking at large language models and machine learning and how you train them, that very few people, I have not seen any government talk about how to tax the AI companies for how they train their models with the data. So they are not paying for the data which they use to train. They are not respecting copyright in terms of when they train it. They are not giving anything back. So one of the things which I found very interesting when you talk about the social, ecological, and feminist policy in terms of cooperation, does the BMZ have actually a clear understanding that you need to go into the economics of how to make AI smarter? And what would it be actually in terms of a cooperative model for that aspect? Because then we are really addressing, Valeria, the issue of this session, which is alternative technology frameworks. We need to see, and I think Jaime is very clear on that, is that we have very much a hidden extraction situation, exploitation situation, but it is not being recognized as such, and therefore we are not even using market models, which means making them pay for the smartness of their models. And we are not moving to this, although we have with the German government one of the key players in that industry, I mean, global play. So my question is very much to Ilmas on this issue. Have you thought about it yourself? Is there anything in terms of the policy development? And then Valeria, I would be very happy to engage more with APC in terms of finding good examples of where you are not creating a parallel economy, but where you are basically seeing, OK, how can we transform the economy and do that in a different way? So that would be my little five cents.

Onsite Moderator:
Thank you. Thank you so much for your contribution. Is there any other contribution from the floor or pass? Let me know if there are remote participants who would like to intervene or present questions for the panel. No. Not yet here, yeah. OK, so obviously the panel is invited to react, to comment, to respond to what has been said. Any one of you could like to take that on, please?

Yilmaz Akkoyun:
Yeah, Peter, thank you so much. It’s a very interesting question. Let me first say one sentence to our further engagement in this regard, and then get back to your answer. The BEAMSET’s digital initiatives provide knowledge about regulation, setting standards, I said, in order to promote our goals for fair digital transition in partner countries. And one initiative which I didn’t mention but is really fitting in this context is the BEAMSET initiative, Fair Forward, which contributes towards the development of open AI training data sets in Kiswahili and Luganda, inter alia, languages spoken by more than 150 million people collectively. Further through the initiative Fair Forward, we have worked with governments Rwanda, Ghana, and India, I mentioned, and how they contribute for green tech solutions. We see open access to AI training in data and research. as well as open-source AI models as a foundation for local innovation. On Tuesday in our session our partners from the Mozilla Foundation are also here. The economics of our engagement is super interesting. The aspects that you mentioned I’m a fan of that approach actually as a studied economist but I think for now we have a different approach and I will get that question take that back to Berlin and discuss with our colleagues which are operationalizing doing that program and programs are also developing and changing I would say generally but at the end for me it’s important what the outcome and the impact of these programs are and how they can contribute towards local solutions and transforming the local population and the economics of the local development is essential but I also think it’s a global question which we need to discuss and taking it from there I would like to engage with you beyond this panel and discussion and let’s discuss also after the IGF please and maybe other colleagues on the panel have also contributed.

Onsite Moderator:
Thank you. The cooperatives that are present here obviously they have been also thinking and implementing different type of solutions and providing responses so let me just check if there are other questions or comments here Yes, please go ahead

Audience:
Hello, I’m Daichi Sakamoto and Dova Corporation is a private company in Japan So, very interesting discussion. And then, so, I’m working on the, here, so, IT industry in Japan. And then, in Japan, there is a word that digital dokata. Digital dokata, dokata means the construction worker. So, it’s just a digital worker, but working like as the construction worker means, so, very small work. And then, so, accumulation of the small workers, small work, will be big, so, make big building. This is the Japanese, so, industry culture. So, but, if the AI come in here, and then, maybe, this mindset will be changed, but I think it will be moderate, because, so, it’s very big change. And then, so, this kind of the layer structure, it’s the small worker will be used by the readers, and then readers, reader use, and then, this kind of the pyramid structure is existing in the industry. So, maybe, if we think about this digital transformation, this digital transformation, we need to take care each, all of layers. So, how to transformation, how to transform each layers. So, at that time, maybe, the AI will be, so, violate or disrupt this, the layers, and then, maybe, new style or new business model will come. So, in fact, so, I heard several, I offered several works of the new era, or new business model. That means the new work is training AI. So training AI means to make a data of the conversation. It’s very easy. And maybe anyone can speak Japanese, can train the AI. So this is just a new style work. But there is also the big gap of the current work and the previous work. So maybe this kind of the gap of the new business model and the current model will be the problem in the industry. I felt that in this discussion.

Onsite Moderator:
Thank you very much. I would like to finally invite the panel and our remote speakers as well to just share some final remarks with some recommendations or demands that you might have to different stakeholders, including governments, of course. If you want to dig a little bit more into the point that was brought up about solutions and responses, so you are welcome to do so. So I would like to start with Becky. If you could like to share some final comments with the audience, please.

Becky Kazansky:
Yeah, thank you so much. I’ll keep my final comment very short, just to say that one important thing that is required in order to support the kinds of solutions that already exist and pathways that already exist within a just transition, and that includes examples brought forward today by panelists both remotely and in the room. So to support those, we also have to challenge what the climate justice movement for decades has called out as false and misleading climate solutions. So that includes pushing for policy that can address greenwashing. and also pushing for strong regulation around speculative and dangerous climate technologies. Some of these technologies are not always part of the digitalization discussion, for example, solar geoengineering, carbon credits, but they are technologies that are being invested into heavily by big tech companies as part of plans to be able to continue the profit models that they rely on. So that’s why it’s important for the audience of the IGF to also begin to engage around these kinds of technologies as well and see them as part of the same discussion. So thank you so much.

Onsite Moderator:
Thank you so much. So let me go to Florencia. Florencia, you would like to share your final comments and demands or recommendations?

Florencia Roveri:
Yes, thank you. Just to highlight again the aspect about responsibilities related to each holder, in our focus, e-waste is a very complex problem and the interrelation of holders and of actors is a challenge and also maybe it would need to be involved in the idea that it should be a public service to assume this problem and also the fact of the profitability of these actions. We work in a very small experience, but it’s a huge problem affecting really deeply environment and also it’s a problem that it has a very big potential in work generation. possibilities and opportunities for people and also for addressing the digital divide also. So, thank you.

Onsite Moderator:
Thank you, Florencia. Let me go to Jilmes for your final comments, please.

Yilmaz Akkoyun:
Thank you so much. This was super interesting and an honor to be here. BMZ contributes to various political responses, process and fora. Our key question is how can we reinforce efforts to bring local perspectives and national perspectives from the global south in the international arena. And we have a wide growing international network in working relations with a number of governments, but also especially civil society actors and other stakeholders, which we use in favor of more digital cooperation. On an international level, the German government especially supports the Global Digital Compact, which was mentioned before. And we also actively engage in discussions on in the G7, G20 context and multi-stakeholder initiatives such as Gafstech that I mentioned and the Digital for Development Hub of the European Union. Global digital cooperation is essential for us to support a holistic approach to the digital transformation, but not also for its opportunities, but also for its risks. I personally think we must foster close cooperation on a large scale in order to advance social and a sustainable digital transition around the globe. This is why we are here. Let’s stay in touch. This was super helpful. Thank you so much for your perspectives. I think with sharing these formats, we are stronger and can build a digital world where we can achieve our goals together.

Onsite Moderator:
Thank you so much. Jaime You have the floor, please. Thank you.

Jaime Villareal:
I agree with the gentleman who called out these large companies for what essentially is criminal behavior of using our data to train these language models at no cost of their own. And while I agree that we need to hold these companies and corporations accountable for these actions, the idea of allowing them to pay a fine, to pay a tax, I have strong questions about this. How is this different than the system of carbon credits? And how is this different than a shell game that allows them to do wrong and pay for it later, right? With the enormous profits they’re able to make from that. And likewise, this idea that how these questions, I think they’re very interesting questions about how this changes the role of the worker and what participation we can have as workers in training AI models. But I think it’s important to remember also that there’s choosing to be a worker and then there are ways that we will be forced to be workers and we will be forced to train AI. We will be forced just to have access to technology through tiny widgets that are presented to us, to puzzles that we have to solve, through any kind of information that we have to give to the AI. We will have no choice in training these models. And what do we do about that? Where we are exploited and we are not even treated as real workers and we are essentially serfs within this wider system. I think that, of course, supporting local initiatives, of course, supporting indigenous languages and their preservation is tremendously important, but I don’t believe that we can apply a single model to all cases. And I think it’s very important to ask ourselves, are we asking, are we listening to these communities directly? Is this what they want? And maybe there are cases where they are interested in experimenting and having access to these technologies, but I don’t think we can apply this as a single solution across the board to everywhere, that this is the way to stimulate local… local preservation of languages and indigenous cultures, I think we have to ask and everywhere, there has to be a proper consultation with local communities whether this is something they are actually interested in.

Onsite Moderator:
Absolutely, and the global community has a role to play in ensuring that those perspectives and the ones impacted in reality are brought to these conversations because as Jaime is pointing out, there is not a single solution that fits everyone. So hearing from the ones that are impacted and the realities and particularities is very important and for that cooperation and also the commitment of all the stakeholders to make sure that those voices are heard because there is a voice. The problem is that they are not welcome or not heard in different spaces. So I think that’s very necessary and one of the needed actions in order to change the paradigm. So let me close the panel with Kem Lee. Kem Lee, your final remarks and then to thanking you all for the presence and the comments and for joining the conversation about this key issue. Thank you very much.

Kemly Camacho:
Thank you, Mr. Brook for your intervention that made me think a lot, yes. But I have the same reaction than Jaime, yes. Then if you pay, you can do it, yes. And I would like to also integrate and discuss with my colleagues with a feminist analysis what you are proposing. Yes, especially because of that, because if we charge the machine learning, the training machine learning is not a way to accept that they can do that if they charge. And also because in the feminist analysis, technology frameworks has to be very related to solve the concrete problem in the context where we women live. Then technology framework for us. yes, have to be related with this context around us, the care of our children, the care of our community, then our technology framework, we prioritize these technology frameworks. Then this is one thing. The second thing about the job and the fair job and the precariousness of work and how that is transforming and how all what we win as workers are transforming and this is another discussion. And I think this business model, collective in the model, cooperatives and all of that have in the center a fair job and especially a fair job for women. Then it’s totally connected. If we have fair jobs, we can survive as humanity in this world. If we have precarious jobs or work, we are not going to survive as humanity for sure. Then just to say thank you very much. I think this is a conversation to follow and go in depth and discover and explore. Thank you.

Onsite Moderator:
Thank you very much again for your openness and let’s continue the conversation in the different spaces here at the IGF. So thank you very much. Thank you. Thank you. Thank you. Thank you very much. Thank you. Thank you.

Audience

Speech speed

148 words per minute

Speech length

972 words

Speech time

395 secs

Becky Kazansky

Speech speed

149 words per minute

Speech length

1576 words

Speech time

634 secs

Florencia Roveri

Speech speed

110 words per minute

Speech length

1230 words

Speech time

670 secs

Jaime Villareal

Speech speed

165 words per minute

Speech length

1479 words

Speech time

538 secs

Kemly Camacho

Speech speed

128 words per minute

Speech length

1462 words

Speech time

685 secs

Online Moderator

Speech speed

142 words per minute

Speech length

531 words

Speech time

225 secs

Onsite Moderator

Speech speed

162 words per minute

Speech length

1591 words

Speech time

589 secs

Yilmaz Akkoyun

Speech speed

151 words per minute

Speech length

2192 words

Speech time

873 secs

How to build trust in user-centric digital public services | IGF 2023 Day 0 Event #193

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis examines the incorporation of artificial intelligence (AI) and digital services in government decision-making processes, providing a comprehensive overview. One key aspect highlighted is the significance of human intervention in AI-driven systems to foster trust among citizens. AI has the potential to enhance the efficiency of government systems, which are rule-based and easily automated. However, human involvement is essential to address potential biases or errors introduced by AI.

The analysis also addresses concerns regarding the exclusion of non-citizens, migrants, and workers from other countries in digital public services. This exclusion may result from the lack of personal identifiers, such as an Aadhaar number in India, which can limit access. To avoid exacerbating existing inequalities, it emphasizes the importance of inclusivity in the development and implementation of digital services.

Furthermore, the analysis raises a crucial concern about digital sovereignty in the context of cloud computing. It notes that many governmental services are shifting to the cloud, and most countries rely on foreign cloud infrastructures. This dependence raises concerns about data breaches, loss of control, and vulnerability to foreign interference. The analysis advocates for caution in heavily relying on foreign cloud infrastructures and calls for strategies to ensure digital sovereignty in the age of cloud computing.

Privacy and data security are also significant considerations in AI implementation. The analysis highlights the need to prevent AI from disclosing critical information gathered and analyzed from the cloud or internet. It emphasizes implementing measures to limit what AI publicly discloses and exercising caution in determining AI’s access to data to protect sensitive information and maintain privacy.

In summary, the analysis emphasizes the need for careful consideration when implementing AI and digital services in government decision-making processes. It argues for human intervention to build trust, inclusivity in digital services, concerns about digital sovereignty in cloud computing, and securing critical information from AI disclosure. These points promote responsible and mindful adoption of AI and digital technologies in the public sector, creating a more equitable, sovereign, and secure environment.

Gautham Ravichander

Building trust in digital government is a significant challenge that hinges on the delivery of reliable, transparent services that work consistently. To foster trust, efficient service delivery, transparency, and data privacy are key factors. Timeliness and clarity in service provision play a crucial role in increasing trust. Providing granular information to citizens is also important, as it empowers them and enhances transparency. Additionally, reforming processes and minimizing data collection help build trust by adhering to the principle of purpose limitation.

Furthermore, trust in digital government can be strengthened by prioritising the trustworthiness of Artificial Intelligence (AI) systems over their efficiency. While rule-bound AI systems are more easily translated into algorithms, the presence of human involvement remains important for the comfort of both citizens and government employees. Ensuring that humans are part of the decision-making loop helps instill trust in the AI systems. This highlights the necessity of human oversight and accountability when employing AI in government operations.

Cloud computing is recognised as a cost-effective and efficient solution for managing large data and resources compared to maintaining physical servers. Countries like India and Germany have adopted similar approaches to cloud computing, recognizing the benefits it offers. The costs associated with maintaining physical infrastructure often outweigh the expenses of utilizing cloud services. Therefore, embracing cloud computing can lead to better resource management and cost savings for governments.

In terms of cybersecurity, breaches in government systems are frequently the result of poor communication and lack of training, rather than sophisticated hacking activities. Approximately 50% of breaches occur due to accidental information release, highlighting the importance of effective communication and comprehensive training programs to minimize such incidents. Addressing these issues can help governments strengthen their cybersecurity protocols and protect sensitive data more effectively.

In conclusion, building trust in digital government necessitates the delivery of reliable and transparent services, as well as an emphasis on data privacy. The integration of physical and digital interactions, known as ‘phygital’, is crucial for the success of digital government globally. Additionally, prioritising the trustworthiness of AI systems and embracing cloud computing can contribute to more efficient and cost-effective government operations. Effective communication and robust training programs are also vital to mitigate cybersecurity breaches and protect sensitive information. By addressing these key areas, governments can foster trust and confidence among citizens in their digital services and operations.

Sascha Michael Nies

The panel discussed the significance of cybersecurity in establishing trust in digital government services. They explored various aspects such as user-friendliness, ease of access, and reliability. The unanimous agreement was that cybersecurity plays a crucial role in fostering trust in these services.

The panel stressed that user-friendliness alone is insufficient to instill confidence in digital government services. While a user-friendly interface is important and enhances the overall user experience, it is equally important to ensure the platform’s security against cyber threats. Without strong cybersecurity measures, users may hesitate to engage with these services, despite their user-friendly nature.

Additionally, the ease of access to digital government services is closely linked to cybersecurity. Users must have assurance that their personal information and data are protected when accessing these services. The panel highlighted that a cybersecurity breach can not only compromise user data but also erode trust in these services, leading to a decrease in willingness to participate.

The panel also discussed the reliability of digital government services in relation to cybersecurity. Users need to trust that these services are dependable and their data will remain secure. A robust cybersecurity framework ensures the integrity and availability of these services, mitigating potential threats or disruptions. Without a reliable system in place, users may be discouraged from utilizing digital government services and may revert to traditional methods.

In conclusion, the panel unanimously agreed that cybersecurity is a critical component of digital government services and a key factor in establishing trust. It encompasses factors such as user-friendliness, ease of access, and reliability. Strong cybersecurity measures are essential for fostering confidence, protecting user data, and maintaining the integrity of digital government services.

Moderator – Christopher Newman

The analysis provides a comprehensive overview of three key aspects of digital government strategies. Firstly, in Brazil, inclusion and accessibility are given utmost importance. The government has actively sought the feedback of over 3,000 individuals to promote these objectives. This commitment to inclusivity is further bolstered through the encouragement of effective communication and the use of user-friendly design systems by public administrations. By prioritising these measures, the Brazilian digital government strategy aims to ensure that all citizens can engage with and benefit from government services.

The second point revolves around the need to build trust in the application of artificial intelligence (AI) within public administration. As AI technology becomes more prevalent, citizens may find themselves faced with decisions that are made by an AI. Therefore, establishing trust in the use of AI is crucial. The analysis suggests that this trust can be cultivated by focusing on transparency and open communication. Public administrations must clearly communicate how AI is being used and ensure that there is a clear understanding of how decisions are made. By doing so, trust can be fostered, ensuring that citizens have confidence in the use of AI within public administration.

The third important aspect emphasized in the analysis is the significance of clear communication about data usage. The acceptance of citizens is vital in this regard. When acquiring data from citizens, it is essential to communicate how that data will be used. This transparency not only helps in building trust but also promotes openness and accountability. By clearly articulating data usage policies, governments can establish a sense of transparency, which is crucial for fostering trust among citizens.

Overall, the analysis underscores the critical role of inclusion, trust in AI, and clear communication in digital government strategies. By prioritising these factors, governments can create more inclusive and accessible systems, build trust in the use of AI, and establish transparency and accountability when it comes to data usage. These measures are crucial for ensuring that digital government strategies effectively serve the needs and interests of all citizens.

Rudolf Gridl

Digital services that are user-friendly and reliable are essential in building trust among users. Services need to be convenient, effective, and accessible at any time and anywhere. Research has shown that if services are not user-friendly, people will not use them, even if they are secure and data-protective. Striking a balance between user-friendliness and data protection/security is crucial. While user-friendly and customer experience can sometimes compromise data protection and security, there must be a trade-off between these aspects to foster trust and encourage the use of digital government services.

Robust data governance frameworks are vital in building trust in digital public services. In the case of Germany, data protection is taken seriously, with a long-standing tradition of protecting personal information. The country even features a constitutional right for informational self-determination. Interestingly, trust in commercial entities is often higher compared to trust in the state when it comes to data protection. This highlights the importance of having strong data governance measures in place to ensure transparency and accountability in handling personal data.

However, data protection concerns can hinder the implementation of digital services. For instance, the introduction of a digitally exclusive nationwide public transport ticket in Germany faced controversy due to data protection concerns. This emphasizes the importance of addressing these concerns and developing solutions that address the privacy and security of users’ data.

Despite these concerns, once citizens experience the convenience and benefits of digital services, they tend to accept and appreciate them. This was seen in the case of the digitally exclusive public ticket service, which was widely received positively by citizens for its convenience. This highlights the need for effective communication and education campaigns to address any initial apprehensions and build trust in digital services.

Involving civil servants in the AI-driven process is crucial for a holistic AI-driven government. By empowering civil servants and ensuring they are part of the decision-making process, governments can better incorporate AI technologies while maintaining human oversight and accountability. This helps build trust and confidence in the use of AI in public administration.

The German Government is actively working on a solution for cloud and cybersecurity. They are pursuing a two-track approach, which involves building the federal German cloud and modifying international cloud systems to act as sovereign clouds for Germany. The goal is to create a user-friendly and highly protected system that meets the country’s cybersecurity needs.

Overall, user-friendly and reliable digital services, along with robust data governance frameworks and effective cybersecurity measures, are essential for building trust in digital government services. Striking a balance between user-friendliness and data protection/security, involving civil servants in the decision-making process, and effectively communicating the benefits of digital services are crucial steps towards fostering trust and acceptance among users.

Valeriya Ionan

The analysis focuses on the topics of trust and digital transformation. Trust is described as the confidence in the actions of stakeholders, specifically the appropriateness of their actions without the need for constant confirmation. Institutional trust is highlighted as being of great importance.

Security is identified as a fundamental requirement for trust. The report then goes on to discuss the digital transformation efforts in Ukraine. It is mentioned that Ukraine is the first country to have digital passports that are completely equivalent to traditional paper or plastic passports. This achievement is seen as a major milestone in the world of digital transformation.

The analysis draws attention to the Diya app, which has been widely embraced by Ukrainians. The app offers a range of services, including document storage, fine and tax payments, and has become a trusted solution for millions of users. This has significantly contributed to public trust in digital services.

The Diya ecosystem is highlighted as a comprehensive platform that encompasses multiple aspects of public services, such as business registration, IT industry support, SME development, and education. It is evident that the Ukrainian government has invested heavily in creating a robust digital infrastructure to support its citizens and promote digital transformation.

The report emphasizes the importance of maintaining continuous communication with citizens about the benefits and significance of digital transformation. It is crucial for the government to involve citizens in the development of new services and to regularly communicate the advantages of digital transformation, including its role in promoting transparency and fighting corruption.

The analysis also highlights international cooperation on AI regulation, which is expected to simplify collaboration with European partners and attract investments. Ukraine is set to assess the impact of technology on human rights and sign voluntary codes of conduct for AI, demonstrating its commitment to responsible AI development.

Data privacy and security are identified as key concerns during the digital transformation process. The DEA system in Ukraine is praised for its approach of connecting directly to highly secure state registers without storing personal data. Regular communication from the government to citizens about digital transformation and privacy is considered crucial.

Digital literacy and accessibility are other important factors discussed in the analysis. The report stresses the need for digital literacy programs to be accessible to everyone, including those without gadgets or internet access, as well as elderly individuals. Digital hubs have been created in Ukraine to facilitate digital literacy efforts.

Offline centres for public services are still available in Ukraine, catering to those who prefer not to use digital services. This is seen as an important consideration to ensure inclusivity and cater to a diverse range of user preferences.

Overall, the analysis highlights the importance of trust in the context of digital transformation and underscores the efforts made by Ukraine to foster public trust in digital services. It also underscores the need for continuous communication, collaboration, and a strong focus on security and privacy to ensure the successful implementation of digital transformation efforts.

Luanna Roncaratti

In Brazil, the biggest challenge in public service delivery is the existing siloed and fragmented model. This traditional bureaucratic model, based on how the government is organised rather than what people deserve and demand, hinders the efficient provision of services. The overall sentiment towards this issue is negative.

To address this challenge, the country has been investing in centralised tools and platforms to move towards a whole-of-government approach. This positive development aims to integrate thousands of services by leveraging a single-window portal called GovBR and the National Digital ID. The interoperability platform, however, requires further work to fully achieve its objectives. The sentiment towards this argument is positive.

Brazil’s digital government strategy is built on international experiences and recommendations from the OECD. It focuses on citizen-centricity, aiming to provide an easy and simple way for citizens to interact with the government. Extensive user research has been conducted, with over 150 projects and feedback from more than 3,000 people. This research has helped in the development of initiatives and solutions. The sentiment towards this argument is also positive.

Another important aspect highlighted is the need for plain and simple language in digital tools. Many difficulties faced by people are related to communication rather than technological tools. By improving communication through clear and understandable language, the overall experience can be enhanced. The sentiment towards this argument is positive.

Brazil has demonstrated its commitment to digital inclusion and accessibility through various initiatives. For example, an automatic translation tool for sign language called Vilibras has been introduced, making over 100,000 translations daily on Brazilian governmental web pages. Additionally, a design system has been defined for visual communication, offering a unique experience. A quality lab and model for digital services improvement and evolution have also been launched. Furthermore, an API for user feedback and satisfaction assessment is provided. The sentiment towards this argument is positive.

In the context of AI usage, it is crucial to prioritise transparency to build trust. Users should be informed when AI is being used and how it is working. This transparency helps prevent potential biases and discrimination. The sentiment towards this argument is positive.

However, the analysis also highlights the potential risks related to AI decisions. Cultural information embedded in AI algorithms can lead to discrimination, biases, and prejudice. To address this, users affected by the decisions should have the right to request a review of the provided solution. The sentiment towards this argument is negative.

Data protection and the secure construction of AI systems are also important concerns. AI learning can make data more attractive to hackers and susceptible to data leaks. To mitigate these risks, secure and robust AI systems must be built. The sentiment towards this argument is neutral.

Effective governance plays a crucial role in responsible AI usage. Risk analysis, constant algorithm reviews, and data quality analysis are essential actions to prevent problems related to AI and data misuse. The sentiment towards this argument is positive.

Ensuring data interoperability while maintaining its security is another noteworthy observation. Luanna Roncaratti’s organisation focuses on preparing and strengthening the resilience and capacity of different public institutions to protect their data. Instead of storing data, the organisation aims to make different data sets interoperable. The sentiment towards this argument is neutral.

Lastly, Luanna Roncaratti advocates for providing physical responses to people demanding public services, even without any documents. As an example, Brazil’s public health system offers services to any person arriving without any documents. This approach emphasises the importance of inclusivity and access to public services. The sentiment towards this argument is positive.

In conclusion, Brazil’s public service delivery faces challenges due to a siloed and fragmented model. However, efforts are being made to overcome these challenges by investing in centralised tools and platforms, conducting user research, prioritising citizen-centricity, improving communication, and promoting digital inclusion and accessibility. Transparency, responsible AI usage, and data protection are important considerations in the country’s digital governance strategy. Additionally, offering physical responses to people demanding public services without any documents underscores the commitment to inclusivity. These efforts collectively aim to enhance public service delivery and meet the needs and expectations of the people in Brazil.

Session transcript

Moderator – Christopher Newman:
Welcome to today’s session. Good afternoon, everyone. Very warm welcome to the session, how to build trust in user-centric digital public services. We’re happy that you’re joining us here today for this Day Zero event to kick off the Internet Governance Forum 2023. For all those people joining, please come in, find a seat. My name is Christopher Newman. I’m an advisor at the German Agency for International Cooperation, GIZ, working in the field of digital governance, and I will be your on-site moderator today. A brief note on housekeeping and what we plan to cover in the next hour or so. Our session is being held in hybrid format, as you can see, and will be a roundtable discussion followed by an open question and answer session. After hearing from our panelists, two of whom are here at the top of the table, two of whom are joining us virtually. We encourage the audience, that’s all of you here in this room and all those joining from around the world, to get involved in the discussion. For all participants joining us via Zoom, please keep your microphones muted for the duration of the session. I believe your microphones are automatically muted, so that makes things easier. And you are encouraged to post questions to our panelists in the chat at any time. So if you have a question burning to get off your chest, please feel free to post it in the chat, and we will pick it up in the Q&A. This session is organized by the German Federal Ministry for Digital and Transport, together with GIZ. The German Ministry for Federal, Federal Ministry for Digital and Transport engages in digital dialogues with several key countries, partner countries around the world, to shape better framework conditions for the digital transformation of our governments, economies, and societies. As a multi-stakeholder initiative, the digital dialogues provide a platform for direct exchange between policymakers, regulators, businesses, and civil society. The goal of this session here today is to share lessons in implementing trustworthy and user-centric digital public services, and to explore the role of data governance and AI in building trust. Now before we jump in and I hand over to our moderators, sorry, to our speakers, a few words on what we’re going to talk about here today. In today’s digital era, citizens increasingly expect government services to be convenient and easily accessible across channels, devices, and platforms. They have the potential to meet citizens’ demands and be more responsive, improve service delivery, and transform how citizens are engaging with their governments. Underpinning the success of these new digital public services is the aspect of trust. Citizens must feel confident that their personal data is handled responsibly, and that digital public services are reliable and secure. This then in turn raises important questions around what data governance frameworks must be put in place, how to drive the adoption of services through user-centered design, and how AI can be leveraged responsibly to unlock possibilities for automation and personalization in a way that boosts efficiency while also maintaining trust. To help unpack some of these complex issues, we have a panel of four esteemed speakers with a wealth of experience on this topic, representing four different country perspectives. I would like to introduce them to you. First off, here in the room, we have Dr. Rudolf Griedl, Director General of the Central Department at the German Federal Ministry for Digital and Transport. In this role, he’s responsible for advancing the digitalization in his administration. His ministry also coordinates across the government on Germany’s digital and data strategies. Previously, he headed the Department of International Digital Policy at the German Federal Ministry for Economic Affairs and Energy. Next, moving to the online world, we are happy to have joining us virtually from Kyiv, Valeria Yonan. Valeria is the Deputy Minister at the Ministry of Digital Transformation in Ukraine, where she oversees Ukraine’s National Digital Literacy Program, development and growth of SMEs and entrepreneurship, regional digital transformation, as well as Euro integration and international relations. Back in this room, we have Dr. Luana Roncaracci, I hope I pronounced that okay, who serves as Deputy Secretary of Digital Government at the Brazilian Ministry of Management and Innovation in Public Services. Luana is responsible for coordinating the digital transformation of the federal administration, as well as developing Brazil’s national strategy of digital government in cooperation with states and municipalities. Last but not least, online, we have joining us Gautam Ravichander, who is Head of Strategy at the eGov Foundation in India. Over the past 20 years, the eGov Foundation has developed and implemented digital solutions for city and state administrations across India to develop accessible, affordable, and inclusive e-services. Gautam previously led eGov’s policy initiatives with the government of India and partner states. Welcome to you all. Now, without further ado, let’s jump straight into our discussion. We will start off with a lightning round, and I would like to ask each participant to briefly, in one minute or so, share your thoughts on the following question. What do you see as the biggest challenge in building trust in digital government? And please stick to the time allotted, so I don’t have to be rude and cut you off. And we’re going to start with Gautam. Please, the floor is yours.

Gautham Ravichander:
Thank you, Christopher. So what I’m going to say is going to sound a little simple, but it has to work. It has to work reliably, transparently, on time, every time. That is unfortunately not the experience of many people in many parts of the world in much of recent history, right? So it’s not just citizens who need to see this working, it’s even government leaders and government officials who need to see these systems working. They have to believe that these systems work, they deliver transparency, they deliver services, they deliver benefits, and they make life easier for everyone involved. And they do not impinge on sovereignty, otherwise they will not even initiate such efforts, especially in much of the developing world. The other element is in much of the world, it’s not really going to be pure digital government. You’re going to have what we call digital government, which is a portmanteau of physical and digital. We need humans in the loop, people who will actually work with citizens on the ground because they have trust face-to-face, and enable them to access the digital world. So I think this is going to be important, making sure that the seamless experience of digital government is something that everybody experiences for trust to start building.

Moderator – Christopher Newman:
Gautam, thank you very much. I learned a new word there, fidgetal. I’m not sure about you. I’ve heard of phablet for the phone-slash-tablet, but fidgetal is a good word in integrating digital and physical. I’d like to hand the word over to Dr. Griedl. What do you see as the biggest challenge of building trust in digital government?

Rudolf Gridl:
Actually, thank you very much, Christopher, and welcome to everybody. From my side, actually, much in the same direction. I think the services have to be user-friendly and reliable at the same time. This is sometimes a challenge. The more we get into user-friendliness and customer experience, the less sometimes we have to be able to respect data protection and security issues. So there has to be a trade-off, but for the people to, first of all, use these services, they have to work. They have to be convenient. They have to be in place every time, everywhere. This is something that we are experiencing in Germany. If this is not the case, you can build a very secure and a very data-protective framework. People won’t use it. So I think that’s the most important challenge, and my minute is over.

Moderator – Christopher Newman:
Thank you very much, Dr. Griedl. Valeria, over to you.

Valeriya Ionan:
Good afternoon, dear ladies and gentlemen. I would like to dig a little bit deeper. So I would start with another question, and what is trust, and what does it mean to trust, and when trust happens by default? Can it really happen by default? And I think you will agree with me that this is the complex question for just one minute. So Chris, please don’t be rude to me, but probably I will need 30 seconds more. So however, I like one of the definitions that trust is confidence in the appropriateness of actions of a certain stakeholder without a need of actualizing such confidence on a regular basis. And this is great definitions to my mind, leads us to some very important conclusions. First of all, institutional trust is very important. Secondly, therefore, one of the basic requirements of trust is security. And thirdly, when it comes to digital government, sometimes there might be no correlation between electronic transparency and trust in government. So what to do about it? I think we have a lot to discuss during today’s session. Thank you.

Moderator – Christopher Newman:
Thank you very much, Valeria. And thank you for sticking to the time as well. Finally, to round us off in the lightning round, Luana.

Luanna Roncaratti:
Well, good afternoon, ladies and gentlemen. Thank you for having me. And I’m going directly to the point, well, from our point of view, we believe that one of the biggest challenges that we have is the siloed and fragmented model of providing public services. I think it impacts a lot the way how service delivery is done today. And it comes from the traditional bureaucratic model that it used to be defined by the way government is organized and not in the way people deserve and demand services, public services. And by investing in centralized tools and platforms, we are trying to advance towards a whole of government approach. And we’ve been discussing and defining and providing tools such as the Single Window Portal, GovBR, the National Digital ID. We’ve hundreds of millions of people that already have the .br account, the digital signature, and mainly also that we have a lot to do, a lot of work ahead of us to do, that is the interoperability platform and that the idea is to integrate thousands of services as well. And I finished my time, sorry.

Moderator – Christopher Newman:
Thank you very much, Luana. So what do we take away from the lightning round? We heard a definition of trust and the complexity of what does it even mean to have trust in these digital public services. We heard the issue of path dependency of how public services were provided in the past, the fragmentation and silo that make it difficult to shift to a digital mindset. And we heard about output legitimacy that it has to work, first and foremost, and each of us as citizens of our respective jurisdictions have experienced that, that it feels good when things work. Now, let us dive into more depth and I would like to now hear perspectives from the panelists on a few different aspects of this, of trust in digital public services. Gautam, starting with you, India has created a tech stack for the entire country of 1.4 billion people and your organization supports governments in building platforms for better service delivery. What have you learned from working with digitalization with various levels of government in India and what factors matter most in fostering trust between governments and citizens?

Gautham Ravichander:
Sure. Thanks for that, Christopher. I think I’m just going to go back quickly to recap my previous answer, it has to work reliably. Now, this is not the software alone, right? It’s the government and the whole process of delivering services and benefits to citizens. So for this, we have to really focus on capacity. Capacity can mean many things, right? At the front line, you know, field employees get the information that they need and they are able to perform the tasks they have to do in a very time-bound manner. Administrators can manage their resources, human, financial, and the performance of these resources to address the issues that they’re coming up with. And ideally speaking, they should be able to spot and preempt crises before they happen. Policy makers should be able to track progress on goals and use the system to have greater confidence that the policy as intended is actually going to translate into execution on the ground. Somewhere in the midst of all of this, you need someone who’s able to actually deploy and manage systems, right? Now, on this, I will say that when those capacities for technology development and maintenance are not within government, that can be contracted in and partnered with. But you cannot get away from the capacity needs at the field levels, the administrator levels, and the policymaker levels. So focusing on building those capacities, especially at the local government level, is going to be important because that’s the interface between human beings and the government itself, right? The second thing really comes down to focusing on making and keeping promises, right? SLA is a promise that I will deliver X service. It could be something as simple as applying for a trade license or running a shop, and I will get it in Y time, and it will happen without any issues or with a certain amount of quality. All levels that I described in the previous part have to align to make this happen. So when we are defining these timelines, when we are defining these promises as governments to citizens, we have to ensure that they are promises that can be kept, they are realistic. So there’s no point promising that a road will get fixed overnight if the local government does not have the financial resources and the manpower to ensure those things can happen. This also needs to be paired with the need of transparency. So as a citizen, can you see the status of any request you’ve made, where it is sitting? Is it delayed? Is it auto-escalated? Can you request escalations? Are you able to get into the details of what is happening without having to walk into a government office? That clarity actually is important, more than just the timeliness, just transparency so that I know where my files are being processed, what’s going on with my application, goes a long way to increasing trust. Otherwise, typically we are all used to a non-functional government system, really looking at sites that say it’s in process and that’s it. We don’t know anything else and we don’t know how long it will take. So focus on giving more granular information to citizens. This third piece really comes back to focusing on security and privacy. Now this is a digital panel. There is always a lot of conversation about technology, encryption mattering, things like privacy by design. But a lot of the real gains have really come from process performance. So for example, a field engineer who is servicing a water connection request does not need to know every single piece of information about the person who has made that request. They just need to know the information required to perform their function. So re-educating them and providing them with that information in a way that they can deliver that service to the citizen, as well as ensuring that on the back end, the various pieces of information that are required to provide that service, for example, verifying your identity, collecting your payments, possibly even verifying your property records, can be done digitally without having to constantly rely on human beings passing the files. What does this mean? It doesn’t mean that we send files as PDFs. It means that you can query systems through APIs and if somebody says, hey, I’m Gautam Ravi Chandra and I live in place X, that is something that an API can verify and it will go back and say, yes, Gautam is who he says he is. And by the way, he does stay at the place he says he stays at, so you can go ahead and provide him that water connection. So in that way, to a certain manner, you’ll automatically start building in purpose limitation by reforming processes and minimizing data collection straight into workflows. I’ll pause there. Thank you.

Moderator – Christopher Newman:
Thank you very much, Gautam. Very important fundamental points you raised there. Returning here to the physical aspect of the digital space we are moving in, Dr. Greedl, Germany is known for being a champion of data protection and data privacy. What role do robust data governance frameworks play in building trust in digital public services?

Rudolf Gridl:
Yes, thank you. Certainly it’s true that we are a country of huge data protection tradition. We have even a constitutional right for informational self-determination that has been created by our constitutional court in the 80s, very early in the process. And data protection is very dear to the heart of Germans. So if you regularly do surveys amongst the Germans, they will say data protection in relation to state and to companies, it’s very, very high on the agenda. If you look at the behavior in day-to-day lives, you see quite a different picture because as long as it is for the private sector, people are willing to share data and provide data to larger companies, to platforms, and so forth. Not so much to the state. So all the official channels are still a little bit mistrusted. What do they do with my data? So it plays a huge role for the acceptance of services that you can credibly argue your data is secure. And it’s not only the security, it’s also the data protection meaning. If we collect any data from the citizen, we will only collect it for the purpose that we are saying that we are collecting it. And we are not going to match it with, I don’t know, other data files. Like we are not going to match the health record. record with the employment record or things like that, which makes things much more difficult for the administration. It would be much easier if we had all these data files together at one place. But we do not, as you were saying, Gautam, we need a transparent process and a transparent administration. And we at the same time do not want a transparent citizen that the state knows everything about. So I give you one example. We were introducing this year a newly designed public transport ticket that is valid all over Germany. It’s one ticket valid all over Germany. And the idea was to introduce it only as a digital ticket only. And so, I mean, it’s a great idea. In Germany, you have like 50-something public transport systems and nobody knows what to do where. And this ticket is a huge convenience for the citizens. But there was a large discussion, is it legitimate for the state to do it digitally because of data protection, because of people who do not have a smartphone, who do not need necessarily your smartphone. You need only a computer, but it doesn’t matter. But it is one example that we had this discussion, now we introduced it, and people get used to it. And they say it’s a great idea and they want it to be continued and so forth. So the data protection, in my view, it’s important. It’s a principle. It’s very dear to the hearts. You have to break it down to a very concrete purpose. And when people see that the data they are providing, at the end, leads to better services and gives them a benefit in their daily lives, they are more than willing to do so. But it’s a struggle. And the second struggle, I don’t know if I have so much time left, no, okay, it’s just that we have, like India, we have so many layers of government. And I will delve into this in another context. Thank you.

Moderator – Christopher Newman:
Thank you, Dr. Griedl. That’s a whole other session. If you want to talk about federalism in Germany, stick around. We can talk about that afterwards. Thank you very much. Now, a question to you, Valeria. Ukraine has developed an app, a state in a smartphone, known as Diya, where citizens can carry, I think it’s around 14, or maybe it’s more by now, digital documents, like their driver’s license or their passport, on their phone. This to some people living in some countries is quite remarkable. How did Diya become a trusted solution used by already half of the Ukrainian population?

Valeriya Ionan:
Thank you, Chris, for this great question. So you all know that Ukraine has been called a European digital transformation tiger. And Ukraine is also the first country in the world where digital passports are totally equivalent to paper or plastic ones. Ministry of Digital Transformation in Ukraine is the newest ministry in Ukrainian government. We are only four years old. And we have a rare opportunity to bring new approaches, build and implement bold vision and deliver concrete products and services like Diya. So first of all, we have a great vision. We want to build the most convenient digital state in the world. And in order to achieve that, we have created an ecosystem of digital projects, which is called Diya, which has five projects. The first one is our state super app, which is used by 19.5 million users. And this state super app combines 14 digital documents around 30 services and digital signature. So even before the full scale Russian invasion to Ukraine, Ukrainians have been able to pay fines or to pay taxes through Diya. And when the full scale Russian invasion to Ukraine started, we’ve been able to launch new services just from three days to a few weeks in order to respond to those challenges that we’ve seen on the market. So just to give you several examples, when the Russian missile started to hit residential area, people started to go to shelters and they did not have any access to news. And that’s how and why we embedded TV and radio into Diya app. Then a lot of people had to relocate from their regions to another regions inside the country. And we have created a service in Diya that gave a possibility to receive a status of internally displaced person. And later, those people with the IDP status could receive a direct social and financial assistance also through Diya. Another great example, which relates a lot to the topic of our today’s session at Trust, it’s a service which is called eRecovery. So this is the possibility on the first stage to receive a compensation from state for damaged or destroyed property because of a full-scale Russian aggression to Ukraine. And the second stage is basically the possibility to deny your property rights online and receive a certificate for a new property also online. So this is a very complex service, not just from the technical side, but also from the side of the Trust. So Diya State Super App is just one project of our Diya ecosystem, which also includes State Portal of Public Services, Diya, where we have all of the services, the majority of services digitized and we plan to have all of the services digitized in a year. And basically, we have the fastest business registration in the world. So you can register your business online in Ukraine only for 10 minutes. Diya CT, which is a special economic and tax regime for IT industry, Diya Business, a separate project for the development of SMEs and Diya Education, a national edutainment platform for reskilling and digital literacy. Because if you are building the most convenient digital state in the world, people have to have at least basic level of digital skills, an opportunity to use services and benefits which state is creating for them. So Diya State Super App today is a love mark because basically, we had a lot of communication before launching this ecosystem and launching this app and explaining our citizens what is digital transformation and why it is important for every citizen. So we also, for example, count the effect of anti-corruption and transparency from digital transformation every year and we also communicate about this to our citizens. Also, we are engaging citizens into the process of the development of new services and basically of better testing of every new service. I think the probably most important thing about creating Diya as a love mark is not just the user-centric and human-centric product that completely changes the way how government cooperates with citizens, but it’s also a regular communication with citizens and explaining all of the benefits that citizens could receive from digital transformation.

Moderator – Christopher Newman:
Thank you very much, Valeria, for elaborating on Diya and how it is a tool for very direct communication also with citizens in addition to offering them services and documents. Before coming to Luana, I would just like to remind the online audience that the chat is open. You are able to post your questions in there anytime. We have two more questions and then we will be opening up to a Q&A. And now over to you, Luana. Brazil’s digital government strategy emphasizes several key principles in building trust and confidence in digital public services. One of these principles that jumped out at me when reading this is citizen-centricity. Can you elaborate on why you think this is central to making the digital transformation of government a success?

Luanna Roncaratti:
Yes, sure. I can also share some of the initiatives that we’ve been conducting on this subject and that’s true. The citizen-centricity is definitely a core value to Brazilian digital government strategy which was elaborated based on international experiences and also OECD recommendations. And we believe that it is about offering an easy and simple way for people to interact with government and also providing high-quality digital services. Brazil is a very diverse country and we need to fit to different backgrounds, to different digital skills. And this discussion is also connected to digital inclusion and also leaving no one behind, which is also a very dear value to our government. And to respond properly to all these necessities, we seek to be continuously hearing from the citizens. We have been conducting several user research projects to map the main difficulties that people have in those digital interactions to evolve our main solutions. And we’ve been hearing more than 3,000 people and we have conducted more than 150 projects so far. And as a result, we have developed some initiatives. We have learned a lot about the main difficulties that people face and those initiatives also work as platform as well. And some of them are not technological ones, but they help to ease the journey for people. And for example, we have worked hardly to promote the use of plain and simple language in digital tools because we learned that many difficulties people face are related to communication and not necessarily to technological tools. We also defined a design system because it helps a lot the visual communication and it presents the interface standards so that people can have a feeling of unique experience in interaction, in interacting with government systems. We also launched a quality lab and a quality model that creates standards to support digital services improvement and evolution. And also, we provide an API for seats and feedback, satisfaction assessment, and other user research initiatives. And there is also finally a tool that we developed that is called Vilibras, which automatically translates the web page content to sign language, and Vilibras makes more than 100,000 translations daily in our governmental web pages. And we are also working to provide tools to provide more self-services and personalized service and more proactive initiatives as well. And I think that all these initiatives have been able to improve inclusion, accessibility, and also the quality of digital services that the federal government now provides in Brazil.

Moderator – Christopher Newman:
Wow. Luana, thank you very much. We’ve covered a lot of ground in this question round, everything from building capacity at different levels of government to the importance of digital skills and meeting them where they are, which is often on their smartphone in many places, while not ignoring the question of inclusion and the topic of leave no one behind and accessibility, and of course, the importance of gaining the acceptance of citizenry also through clearly communicating how their data will be used when we ask for their data as governments. Excellent. Thank you very much. Thank you to the panelists. Before we open it up for Q&A, and I hope everyone here in the room is already thinking of a question or two that they can ask. We will only have time for one per person. A final question now, looking to the future, connected data sets together with advanced analytics open up new opportunities now to offer proactive digital public services, for example, based on life events. At the same time, the use of AI in public administration leads to the fact that citizens might find themselves confronted with a decision that was made by an algorithm and not a human. So now my question to the panel, again, and this time we’ll go in reverse order, is how can trust be built and maintained in an age where AI is increasingly embedded in public administrations? And I’d kindly ask you to limit your answer to two minutes per person so that we can get some questions from the online and offline audience. Duana, the floor is yours.

Luanna Roncaratti:
Well, first of all, we believe that there are some necessary actions that should be taken in order to build a trustable context, environment, and process in which AI is used in public administration. I would like to comment on four of them. Regarding transparency, we are convinced that users need to know when AI is being used and how it is working. And we know that this is a challenge. It’s not always easy to explain and to understand how the results are generated, but we understand that it is necessary that we make efforts to enhance transparency and to communicate properly how AI is working. Secondly, we know that AI decisions may carry cultural information that can lead to discrimination, biases, and prejudice. Then, when controls fail, users affected by the decisions must have the right to request review of the solution provided by AI. Thirdly, with the use of a lot of data combined for AI learning, they become much more attractive to hackers and data leaks. Then, we also believe that it is necessary to invest in privacy and security controls to mitigate risk and avoid threats. And finally, we believe it’s also important that each institution establishes adequate governance, which includes risk analysis, constant review of algorithms, analysis of data quality, and et cetera, to guide the actions that will prevent problems related to the use of AI and also the data misuse.

Moderator – Christopher Newman:
Doana, thank you very much. And with that, straight over to you, Valeria. AI and trust in public administration.

Valeriya Ionan:
Thank you, Chris. So again, building trust is a really complex and long-term process. However, when it comes to AI, it is important to balance between regulation and innovation. Addressing specifically the topic of AI, we in the Ministry of Digital Transformation of Ukraine have just recently presented the roadmap of AI regulation in Ukraine. So according to this roadmap, Ukrainian companies will cooperate with the international partners. Thanks to a legal regime identical to the EU, we will adopt a law similar to the European AI Act. And this will allow us to create identical legal regimes with the EU in the field of AI, simplify cooperation with European partners, and attract investments. We will also provide businesses with tools to prepare for future AI regulation, from assessing the impact of technology on human rights to signing voluntary codes of conduct. We will also publish recommendations to answer questions about what to do right now and what to expect in the future. And of course, a safe digital environment where human rights are protected in the digital space will be also created. Thank you.

Moderator – Christopher Newman:
Very to the point. Thank you very much, Valeria. Dr. Greedle, how can trust be built and maintained in an age where AI is used in public administration?

Rudolf Gridl:
So what we are going to do is we are using, we are starting to use AI, but we will only use it as a tool and not as a kind of decision-making tool. So what we will do is the first step is to have an efficient decision-making by AI. Sorry, Valeria, you can’t be in the front. Is that better? No, it’s better? Okay, okay, thank you. Okay, so shall I restart or no? Okay, so what we are doing is putting, for the time being, always a human at the end of the decision process. That is something that gives trust. Actually, it’s a psychological trust because, as we all know, AI sometimes is more reliable and more precise in decision-making. But we need to be, as a state, as a government, we need to take everybody on board. That is something that we are planning to do, and I hope that it helps us to create this dearly needed trust because, on the other hand, there’s another aspect of trust. It’s not the trust of the citizens. It’s the trust of the civil servants that are dealing with these processes and that are now owners of the processes and that need to be taken on board also into this process. I think for them also, if we want to create a holistic AI-driven government, it’s important to have the civil servants on board and so to give them or to empower them to give the decision also or to be the last ones in the decision line.

Moderator – Christopher Newman:
Thank you very much for raising those two important aspects, human in the loop and trust of civil servants and not only the citizens in the end. To round off this round, Gautam, can I ask you to share your perspective, please?

Gautham Ravichander:
Absolutely. So, I will reinforce what’s already been said. We also believe in the importance of ensuring there are humans in the loop. While government systems being rule-bound tend to be very translatable to AI, it’s still very important for citizens and for the government employees themselves to have the comfort that there are human beings who are reviewing this, that there’s an element of humanity that’s actually going into the decision-making processes. It’s not necessarily going to be more efficient to do this, but it’s going to be more trustworthy, and I think that’s more important during the short run. Over time, we also need to build in robust feedback and grievance loops. This is something that Luana also mentioned. I think it’s important for people to know when AI is being used to actually be able to raise grievance in systems where the AI system has not given a good answer. Beyond that, I think we need to look at a few opportunities that AI presents, right? So, for a country the size of India, the range of contexts in which we work, the types of languages in which we work, it’s important that AI can help ensure translation. So, one of the areas that we as India are definitely exploring is the usage of AI to speak across multiple languages. So, someone in the north speaking in a language called Punjabi could actually be…

Audience:
Yes, hello. My name is Rita from the New Club of Paris. I have a question about the digital inclusion or exclusion. We have here four countries, and if you’re not a citizen in the country, you may be excluded from the services because you don’t have the personal identifier. And that’s, for instance, if you don’t have an Aadhaar number in India, you’re bad luck. And I wonder in Ukraine or in Sweden or in any other country, if you don’t have that digital ID number, you are not a citizen of that country. You cannot receive any services. And I think this applies both in an increasingly global world. It excludes migrants but also expats and workers who work in different countries. So, I think this digital public services are very often exclusive services. How would you address those questions?

Moderator – Christopher Newman:
Thank you very much. I would suggest we throw this back to our panelists before taking any more to not have too many topics on here because they’re all quite big in themselves. So, we had the question of practical ways of showing how governments are dealing with data protection privacy, what it means, and inclusion of non-citizens. Do any of the panelists feel like they want to speak to either of these questions? I see Valeria’s hand up. Please, go ahead.

Valeriya Ionan:
If I may, thank you so much for these great questions. So, I’ll start with the first one and give you an example of DEA and what we are doing. So, for example, for DEA, we use the approach… DEA does not store any personal data. DEA uses the approach data in transit, which means that DEA connects directly to the state registers and shows the data which is needed. That’s like the first answer. To highly secure state registers, a very important remark. Second remark. The next thing is that, remember, I told that it is very important, a regular communication from government to citizens, right? About what digital transformation is, what are digital services, why digital literacy is important, what is privacy, and so on. So, citizens should understand that government already have their data. The question is how government uses this data, right? So, for example, when citizens will understand who and when checks their data in the registry and receive notifications, it is about respect for this data, about avoiding the misusage of this data. Citizens’ data belong to citizens and they have to know it. For example, we in the Ministry of Digital Transformation, we are also in the process of launching such push notifications for all registries. But the first stage, which we already done, was notifications about the revision for all the credit history in DIA. So, for example, if someone checks your credit history or opens a loan, you get notified in DIA. So, you open the notification, go to the link to the Ukrainian Credit History Bureau, and you can react quickly. And the same notification will come if you get a card with a credit limit or open a loan. So, there is like, you know, as you mentioned very correctly, it’s not a simple topic. It’s a very complex topic. On the one hand, you have to work with the prevention. You have to do a lot of communication. You have to launch big projects on digital literacy. You have to make digital literacy available for everyone, not only for those people who have their gadgets and Internet connection, but also for people of elegant age or people who are for some reason like excluded and have no Internet connection or no gadgets at home. You have to create opportunity for them, like going to some special places. Like in Ukraine, we have digital hubs with facilitators who can facilitate the first contact for the people with the gadget and platform and so on. Another thing is basically to explain those things. And the third thing is technical architecture, right? So, how your technical products are built and how do you basically notify people about using their data? Another good question was about the digital exclusion. Well, in Ukraine, for example, we still have offline centers of public services. So, if people don’t want to use digital services, they still can go and use them offline. But the thing is that and basically what revolution made DIA in Ukraine, DIA made digital transformation like a pop culture. DIA is a love mark. We have shown that basically the communication with the government could be as simple as communication with such startups as Uber, as Bolt, as Airbnb or Booking. Two clicks and everything is done. So, you don’t need to stay in lines for four hours. You don’t need to waste your time, waste your money. You have to leave and the less government people have in their lives, the better. And that’s what Ukrainians already understood. And that’s why and how we still can continue and we continue to build new digital products and digital services. So, anyway, it is not obligatory to use DIA. It’s just the will of people to make their lives easier. So, you still have both options. Thank you.

Moderator – Christopher Newman:
Thank you, Valeria, for bringing in the point that we also need to have offline ways to access these services, of course, for the people who don’t want to or can’t access them online for some reason. Do any of the other speakers want to pick up on this point or should we open for another round of questions? Luana, please.

Luanna Roncaratti:
I can quickly try to react also to the questions on the first one. We also don’t storage data. It’s basically a way to interact to make the different data sets that we have interoperable. And what we do concerning data protection and privacy is that we have been working a lot to prepare and to strengthen resiliency and the capacity of different public institutions so that they can safeguard and can protect the data that they already have and they may storage in their data sets and all the systems that they have. And also to communicate better so that people can understand and also have all the precautions necessary in their data. And on the exclusion, in case of Brazil, we have actually a number for foreigners who live in Brazil. They can have the identification if they live there so they can access the digital and also the physical sets that are provided. And in some cases, such as the public health system, even if they don’t have the number and even if some person arrives without any document or anything, they are allowed to be served and be attended in these situations. And we also have in some units, some agencies that can provide the physical response to people who deserve and demand the public services.

Moderator – Christopher Newman:
Thank you very much, Luana. I would like to pick up on that keyword of inclusion and also include our online audience. Here we have a question. And with that, I’ll hand over to the online moderator, Sascha, my colleague, to please share the question.

Sascha Michael Nies:
Yeah, thank you very much. So, of course, we also appreciate questions by our participants online. And we do have a question to the panel on cybersecurity and its relevance for trust in digital government services. I believe some of it has been covered already by our panel. However, the question would be in how far cybersecurity matters in your experience, considering all the aspects discussed today already, such as user-friendliness, ease of access, reliability, and so on.

Moderator – Christopher Newman:
Thank you, Sascha. The question of cybersecurity, the digital elephant in the room that has not been addressed explicitly, perhaps. We’ll take one more question. And this, unfortunately, will then also be the last question. I see a question over there. And there’s a microphone right there by chance. So, Franz, please.

Audience:
So I have, in a way, a related question to cybersecurity. You asked previously how to deal with trust in the age of AI. I ask how to deal with trust in the age of cloud computing in the context that most governmental services are moving to the cloud. And most countries rely on foreign cloud infrastructures, be it Huawei, be it AWS, be it Azure or whatever. So only two countries in the world kind of are having their own domestic cloud operators, and the rest rely on foreign cloud operators. And our partner governments in Africa are quite concerned about the digital sovereignty of their public services running on foreign cloud infrastructure.

Moderator – Christopher Newman:
Thank you very much. I think we have, was there one question? Final question back there. I see it’s a burning question, and then the other two we’ll put together. Okay, please be brief.

Audience:
Yeah, so I’m Glinda. I’m from the Philippines. So since AI gives feedback from the information it sees, gathers, and analyzes from the cloud or Internet, how do we prevent AI from divulging critical information from our systems, databases, and websites? And what limits AI in giving and what it must just give publicly and be cautious in giving information that needs to be kept private?

Moderator – Christopher Newman:
That’s it. Okay, that sounds like a whole other session in and of itself. Thank you for the question. We now have approximately four minutes remaining, and therefore I would like one or two panelists to pick up on the issue of cybersecurity and cloud computing and perhaps another comment on the question of AI, how do we ensure it doesn’t go spilling all our governmental secrets and the secrets of all our citizens? Dr. Greedo, please.

Rudolf Gridl:
Very briefly on the issue of cloud and cybersecurity, that’s a very relevant issue. And as German government, we are very intensively, is it still on? We are working on a two-track solution, either building our own like federal German cloud, which is perhaps feasible for some very dedicated services, but also we are working with international cloud providers on trying to modify their cloud systems in a way that they will be sovereign clouds for Germany. So we are discussing. We will see where we will get, but that’s very important. And the same goes for the cybersecurity, of course. Everything that we are doing in our state system, we are doing it inside a very highly protected cybersecurity-enforced system. I think that is not the challenge. The challenge is to do it user-friendly and cybersecure.

Moderator – Christopher Newman:
Thank you very much for this perspective. Looking to our online speakers, the questions of cybersecurity and cloud computing and AI, anything to add briefly before we wrap up the session? Okay. Oh, I see the hands. I’m sorry. I was looking for physical hands, and there are the virtual hands right in front of us. Okay. Then, Gautam, over to you, please.

Gautham Ravichander:
So on cloud computing, pretty much what Germany is doing, there are similar approaches that India is adopting. I’ll just add one additional piece on cloud. I think what we see is this preference for if the server is in my room, it cannot be hacked, which is a bit of re-education that often now might have to do with folks who are coming into the space. But I think the other element then comes back to the costs of actually maintaining that kind of infrastructure when you kind of run that against the cost of working off the cloud. A lot of times the government officers then quite quickly understand that it’s much better for me to use this as a service rather than to actually build out an entire new team and distract my own attention and my resources on this. On the piece about privacy, I think, and cybersecurity, I think all the points that have been said so far I completely agree with. But primarily, I think one element on cybersecurity that keeps happening with government systems is communication. When you do have breaches or if you’ve dealt with breaches well, keeping people informed, keeping them informed in a way that builds trust is more important than saying it didn’t get breached. Sometimes we also have to train people up front because some of the breaches are not because people hacked them, but because somebody inadvertently released data. And that kind of capacity building is very, in our minds, when we think of like, oh, people are hacking in front of us, this is actually the low-grade capacity building, but it’s the foundations of cybersecurity. So about 50% of breaches happen not because somebody hacked your system, but because somebody released information inadvertently.

Moderator – Christopher Newman:
Thank you very much, Gautam, for raising that point. Valeria, your hand is still up. I’d like you to be brief in your comments before we wrap up the session.

Valeriya Ionan:
I’ll try to do it. I just wanted to give a small story. Well, a few months before the full-scale Russian invasion to Ukraine, we’ve seen the increase in the cyber attacks to Ukraine. And at the same moment, we have been working on a new law, which should have allowed us to transfer data into the cloud. So just a few weeks before the full-scale Russian invasion to Ukraine, this law was adopted, and we moved all the data into cloud. And then when the full-scale Russian invasion started, just in a week, a Russian missile hit the data center physically. It destroyed the data center where we used to store backups. But our data was already in the cloud. So I would just like to address the question that, to our mind, there is no unique solution to that. You always have to balance. And it does not mean that you don’t need a data center or you have to store data only in cloud. So you have to balance. And the same comes about cooperation with different partners. We believe in a golden triangle. Governments should work with the private sector and civil society and find the best ways to cooperate for the mutual benefit. And about the cybersecurity, of course, we take it super seriously. And when it comes, for example, specifically to DIA ecosystem, we have our own red team who is working on a daily basis to find even minor vulnerabilities. We also conduct bug bounties twice a year, and we do plenty of other measures. So cybersecurity is a very big and very important topic. We take it super seriously, and we would be really glad to share our insights, maybe in the next session. Thank you very much for your attention.

Moderator – Christopher Newman:
Thank you very much, Valeria. Dear panelists, thank you for all your inputs, your contributions. Thank you to the audience who is here. I’m sorry we could not take all your questions. It was a very good discussion. Feel free to hang around, float outside with us later. We can continue the discussion. And I wish you an insightful IGF 2023. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

156 words per minute

Speech length

348 words

Speech time

134 secs

Gautham Ravichander

Speech speed

194 words per minute

Speech length

1698 words

Speech time

526 secs

Luanna Roncaratti

Speech speed

145 words per minute

Speech length

1156 words

Speech time

479 secs

Moderator – Christopher Newman

Speech speed

163 words per minute

Speech length

2578 words

Speech time

947 secs

Rudolf Gridl

Speech speed

147 words per minute

Speech length

1206 words

Speech time

492 secs

Sascha Michael Nies

Speech speed

165 words per minute

Speech length

85 words

Speech time

31 secs

Valeriya Ionan

Speech speed

183 words per minute

Speech length

2206 words

Speech time

725 secs

Consumer data rights from Japan to the world | PART 1 | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Javier Ruiz Diaz

Javier Ruiz Diaz, a respected Senior Advisor working on Digital Rights for Consumers International, is actively encouraging collaboration around data governance within the culturally rich and diverse Asia-Pacific region. Consumers International is a global coalition comprising a collective of 200 member organisations that span an impressive 100 nations. This influential group elicits a positive sentiment in its vision of fostering metaphorical harmony in the approach to regional data governance.

Diaz acknowledges the potential of the Asia-Pacific region; its unique position as a cradle of technological innovation and a hub for emergent consumer and digital rights organisations will enable it to contribute priceless ideas and proposals. This untapped capacity has spurred the need for discourse and collaboration within data governance. Accordingly, Diaz is observed fervently advocating for the greater inclusion of this region in global dialogues on data governance, assured of its meaningful potential contribution to the dialogue.

Simultaneously, Diaz is organising a proactive follow-up intervention. This initiative seeks to bridge the gap between consumer and digital rights organisations and policymakers, creating a unified approach to further discussions about data governance in light of rising concerns about consumer rights in the digital era. This collaborative approach aligns harmoniously with the guiding principles that map to the Sustainable Development Goals (SDG 16: Peace, Justice, and Strong Institutions), and mirrors a commitment to establish a robust regulatory framework in digital policymaking efforts.

In summary, the evolving narrative underscores Diaz’s pivotal role in creating innovative partnerships in data governance. It highlights a resonance with SDG 16, advocating for just practices in regulatory landscapes, further solidifying his commitment to peace, justice, and strong institutions. Moreover, his initiatives in synchronising collaborations with SDG 9: Industry, Innovation, and Infrastructure bear testament to his dedication in advocating innovative solutions that pave the way for sustainable infrastructural development.

Amy Kato

Review and Edit: Check for grammatical errors, incorrect sentence formation, typos, or missing details, and make necessary corrections. Ensure UK spelling and grammar are being used in the text, and correct it if not. The expanded summary should mirror the main analysis text as closely as possible. Ideally, include as many long-tail keywords in the summary as fitting, without compromising the summary’s quality.

NAN

The Indo-Pacific Economic Framework for Prosperity (IPATH), a confidential agreement involving 14 nations and governed by the U.S., has drawn much attention due to its significant implications for digital trade, data privacy, and data protection.

IPATH is projected to conclude by November 2023 and has a critical commitment to enforceable cross-data flows. This key aspect has instigated apprehension, as it is perceived as a significant barrier to enhanced data privacy and security. Critics suggest that these enforced requirements could disrupt protective measures for cross-border data transfers, undermining privacy protections; therefore, posing substantial barriers to data privacy and security. This could lead to data being transferred to countries that lack stringent data protection measures.

A contentious aspect of IPATH is the forced non-disclosure of source code and algorithm details. Critics argue this might lead to algorithmic discrimination whilst undermining transparency and accountability. Such restrictions could impede independent verification of how software functions, profoundly impacting the trajectory of AI regulation at regional and national levels.

NAN, a participant in the IPATH negotiations, has expressed its opposition to the initiative. NAN highlights the potential for U.S. control over data flow and transparency in AI and coding, deemed detrimental to Southeast Asian and South Asian countries’ interests. The inclusion of U.S.-Mexico-Canada Agreement (USMCA)-like provisions within IPATH, according to NAN, could limit regulatory options and subject data to the lower-standard data protection norms in the U.S.

In conclusion, although IPATH is promoted as a means to boost prosperity in the Indo-Pacific region, its potential consequences in terms of data privacy, protection, and digital rights have elicited considerable anxiety and resistance. Use of UK English verified; no grammatical errors, typos or omissions detected.

Jam Jacob

Launched in 2011, the Asia-Pacific Economic Cooperation (APEC) Cross Border Privacy Rules (CBPR) system was set up to oversee data governance and privacy. However, it exhibits limited efficacy, supported by the fact that only 9 out of the 21 member economies choose to participate to date.

The certification process for CBPR consists of several phases, beginning with a self-assessment stage, proceeding to an assessment by the accountability agent, followed by a recommendation phase, and culminating in the awarding of the certification.

Nonetheless, the CBPR system has drawn substantial criticism. A prime concern is its inherent tie to the privacy framework established by the Organisation for Economic Cooperation and Development (OECD) in the 1980s – a framework now considered outdated by many. This correlation gives rise to uncertainties about the system’s aptitude to adapt to the fast-evolving digital landscape.

Additionally, the high costs associated with obtaining a CBPR certification – a sum ranging from $15,000 to $40,000 – serve as a deterrent for smaller or less financially well-off organisations to participate.

Further complicating matters, civil society’s inadequate representation in CBPR dialogues and decision-making processes results in a governance approach that is largely market-driven. This flaw could result in the overlooking of broader societal interests and concerns.

In 2022, the more encompassing Global CBPR Forum was introduced. This entity has a wider operational remit compared to the APEC CBPR, leading to speculation that it may render the traditional APEC CBPR system obsolete.

If the Global CBPR Forum indeed offers more thoroughgoing and effective data privacy solutions, it may precipitate a significant shift in the data privacy and governance landscape. However, further research and observation are necessary to verify this potential outcome.

In summary, the APEC CBPR system – although launched with laudable intentions – appears to be encumbered by several key shortcomings, including high costs, limited adoption, linkage to an outmoded privacy framework and underplaying of civil society. Emerging platforms like the Global CBPR Forum might provide alternatives and potential enhancements in the future.

Pablo Trigo Kramchak

The Digital Economy Partnership Agreement (DIPA), a ground-breaking trade agreement, has sparked considerable debate due to a range of features it encompasses. It has been noted that DIPA ostensibly mirrors the provisions of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) with respect to cross-border data transfers. Effectively, the DIPA rules that oversee cross-border data flows mirror those of the CPTPP provision for cross-border information transfers. This alignment is evident in the DIPA’s provisions concerning data flow regulation, as stipulated in Article 4.3, which confirms the parties’ commitments that were embodied in prior agreements.

Another pivotal element in DIPA discussions is its pronounced alignment with the United States’ data governance model. The provisions of DIPA exhibit significant conformity to the approach the United States advocated during the Trans-Pacific Partnership (TPP) negotiations, which formed the basis of the CPTPP agreement. The potential implication of this alignment is that broad acceptance or replication of these terms could effectively result in a de facto standardisation under the American data governance model, according to some critiques.

In spite of its status as an innovative instrument among Free Trade Agreements (FTAs), DIPA has garnered critique for its apparent lack of progress in terms of cross-border data flows. Critiques propound that DIPA fails to carve out a new path in this sphere as it doesn’t lay down minimum standards for personal information protection, instead advancing interoperability via the adoption of voluntary self-regulatory approaches.

Further, due to its reflection of older agreements, the accord could create significant challenges for countries not part of the CPTPP. Consequently, these nations may find complying with DIPA’s terms particularly challenging.

In conclusion, despite its original intentions, DIPA has provoked contention due to its firm affirmation of past agreements, lack of novelty in terms of cross-border data flows, and echoing of the US data governance model. The concerns raised offer valuable insights into the possible implications of broad acceptance or replication of DIPA’s terms, underscoring the necessity for further discussion and careful evaluation.

Minako Morita-Jaeger

In this comprehensive analysis of global data governance models, three predominant approaches are identified, each exemplified by a unique geopolitical entity – the European Union (EU), the United States (US), and China. The EU, focusing on a human-centric methodology, places emphasis on the protection of human rights, fair competition, and effective moderation of platform content. Conversely, the US’s philosophy favours a less intrusive government role and is predominantly market-led. Lastly, China’s state-driven model seeks to establish technological dominance, promote data sovereignty, and exercises robust government surveillance.

When examined from an International Trade perspective, trade agreements frequently prioritise the unobstructed flow of data across borders. This dedication to free data movement presents a significant challenge when attempting to harmonise with critical aspects such as data privacy, fair competition, and intellectual property rights: elements potentially compromised by free data flow agreements.

It’s equally noteworthy to observe the stark disparity in domestic data governance policies among countries aligned within the same trade agreement. This nuance is evident among signatories of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), where countries like the UK demonstrate superior regulations, responsibility towards use policies, engagement with stakeholders and adherence to international norms, in stark contrast to other signatories such as Chile, Malaysia, Peru, and Mexico.

The pursuit of promoting free data flow with trust is a formidable challenge. As a response, some advocate for more grassroots, multi-stakeholder engagements. Currently, data protection is often viewed as an impediment to market access within the framework of trade agreements. This varying interpretation of ‘Free Data Flow with Trust’ underlines the complexity and vast scope of challenges confronting global data governance. In essence, these disparate understandings demonstrate the significant hurdles on the path of responsible and efficient global data governance.

Paula Martins

Paula Martins leads the Association for Progressive Communications (APC), a global networked organisation with 103 members in 74 countries. APC primarily focuses on social, environmental, and gender justice while interweaving technology and data governance. Besides its primary members, APC also partners with environmental and gender organisations that tackle digital issues, positioning data as crucial to a spectrum of operations.

The advocacy and implementation of appropriate policies are central to APC’s work across various regions, thus making data pivotal to their actions. APC has an expansive reach, made evident by their 24 affiliates in Asia, underlining their impressive global presence.

APC has formed a strategic alliance with Consumers International. This collaboration aims to broaden an understanding of data governance within their sphere of operation. The central objective of the partnership is to enhance information sharing regarding progress in regional data governance and to foster an environment that encourages networking among partners. This joint venture seeks to identify and act upon opportunities that would further their comprehension of data landscapes and contribute directly to targeted Sustainable Development Goals (SDGs).

Their SDG focus includes Gender Equality (SDG 5), Industry, Innovation and Infrastructure (SDG 9), Peace, Justice and Strong Institutions (SDG 16), and Partnerships for the Goals (SDG 17). By aligning their work to these specific development goals, APC is poised to make a significant impact in the fields of technology and data management, coupled with a commitment to essential sectors like gender and environmental justice. This positive action towards digital rights and data governance, combined with their capability for collaboration, characterises the current landscape in which APC operates.

Session transcript

Javier Ruiz Diaz:
Thanks a lot for bearing with our last technical checkups. So my name is Javier Ruiz, I’m a Senior Advisor on Digital Rights for Consumers International. It’s a global coalition of consumer organizations from all around the world. We have 200 members in over 100 countries. And we are here today together with Consumers Japan and the APC, the Association for Progressive Communications. And we put this workshop together to, as you saw, to try to start promoting some collaboration in the region around the issues of data governance. Because as we see, the Asia-Pacific region has got a lot to contribute, and it’s got a lot of ideas and proposals for how data should work, which are quite influential globally. And we think that we want to see more discussion from consumer groups and grassroots organizations on this topic. And also to connect these debates with some of the discussions taking place as well. So we have some colleagues here from Austria and from elsewhere coming to also talk about what’s happening. So I’m going to let Amy Cato to introduce Consumers Japan, and then Paula. And then after the brief introductions, we will start with the speakers. And just to give a very brief order of the day, we are not going to keep a totally regimented timetable. We are going to try to be flexible, depending. But the idea is roughly that we are going to have one hour of presentations and discussion on the various data governance initiatives and policies that are taking place in the Asia-Pacific region, which includes the cross-border privacy rules, the IPF, DEPA, Digital Economic Partnership, and similar. Then we will have a second block of roughly, possibly an hour, although we may start getting shorter as we go along, looking at national context and what is happening in Japan, what’s happening in Korea, and what’s happening in other countries in the region. And then the final block would be more like a collective discussion to try to organize some follow-up intervention. So one of the things that we want to see is not just a discussion here today, but trying to get some idea for where consumer and digital rights organizations should intervene, trying to engage with the policy makers on these topics. So I will let now, just after a brief overview, I will let Amy Kato. You want to just introduce Consumer Japan?

Amy Kato:
Hi, my name is Amy Kato from Consumers Japan. Thank you.

Paula Martins:
Hello, good afternoon, my name is Paula Martins. I am Policy Advocacy Lead and Program Manager at the Association for Progressive Communications, APC. APC is a networked organization. We have members and associates in 103, I’m sorry, I’m going to start again, because I’ve really got confused with the numbers, so bear with me, it’s jet lag. So APC is a networked organization, and now I’m going to get the numbers right. So what we have are 103 members and associates in 74 countries. Apologies for that, including some of which are here in the room today and collaborating with this conversation. We have 24 members in Asia. Most of these members are in the global majority countries, and they are very diverse members. We all work on the intersections between social, environmental, and gender justice, and technology. So they are, broadly speaking, digital rights groups, but they really are diverse in terms of the focus that they have in the work that they are doing at the national and the regional level. Data is key to a number of them in different ways. So you have gender organizations, you have environmental organizations, looking at digital issues where data is a critical element of the advocacy and the policy, the capacity building that they are doing. So this is central to the discussions that are taking place within our network. And we were really happy to join efforts with Consumers International to put together this session. Our view, our idea is really to create a space to share info, to learn more about what’s going on in relation to data governance in the region. But also to promote more synergy, including among us, and the idea of bringing together our networks, our partners working on consumers’ rights and digital rights, so that we can explore concrete joint actions, maybe following up to this discussion. So thank you all for being here and joining us today. It is a pleasure to be here in Kyoto.

Javier Ruiz Diaz:
Thank you, Paola. So now, our first presentation today is going to be Dr. Minako Morita-Jager, who’s a senior research fellow on international trade at the University of Sussex in the United Kingdom. And she’s going to give us an overview of the data governance based on her research. So please, Minako.

Minako Morita-Jaeger:
Thank you, Javier. My PowerPoint. Yes, lovely. And then you can, yeah, and you can change it when I, I mean. Can you just wait, please? Then maybe, I can, okay, thank you, yeah. Good afternoon, everybody. My name is Minako Morita-Jager. I’m based in the UK. I’m still, I just came to Japan two days ago, and I’m still suffering jet lag. If I fall in sleep in the middle of my talk, give me a shot, but very gently. Please. So I’m going to just give you a very, you know, that wide picture of what is going on at international level. So I’m working at the University of Sussex. Also, we have that kind of think tank with a co-established together with the Chatham House. We have the UK Trade Policy Observatory, where I’m doing the research policy, policy research for that. And then also that the Center for Inclusive Trade Policy. We are promoting trade policy for all stakeholders equally. Because I’m trade policy expert, I’d like to just explain that kind of the linkage between the data governance and trade as well. First of all, I think you know very well, but what is data governance? And the definition here is maximizing opportunities while protecting the rights. That is the governance. And then, but according to World Bank, sorry, it’s not from the United Nations, but not only good data management, but establishing norms and rules about rights, principles, and obligation around the use of data. Multistakeholder approach is a key for the data governance. This is why we are gathering here today. So when we think about data governance, so this is a kind of welcome to the world of tech hegemony. So there, well, in academia, we had a kind of general understanding of the three types of data governance. First, it’s the left side, the EU. This is a human-centric approach of human rights. It’s a fundamental right of the EU constitution. And they just, the EU is more kind of the promoting the, you know, the more for protecting the human rights, and then also the fair competition, and then platform content moderation. So that all stakeholders equal the benefit from the digital economy. And the opposite or the kind of sort of the contrast is the US type of approach. This is market-driven approach. So that gives really minimum or almost zero in the government intervention, and the market everything, and giving the kind of freedom to conduct business. So free economy, digital economy. And then giving the kind of self-regulatory framework regime. That is, but a base of the, you know, principle here is free speech. It’s not a little bit different type of approach, you know, in comparison with the EU. It’s a free speech. It’s not human rights, but a free speech is a really key for the United States. And then government is kind of taking sort of the partnership, very close partnership with the big tech companies to promote this digital economy strategy. Then lastly, China. China takes the state-driven approach. The government seeks to achieve the technological dominance at the international level, and then promoting data sovereignty. That means the, well, the Communist Party, China Communist Party, really promoting strong surveillance over its citizens. And then sort of control people’s freedom for the sake of the political agenda or propaganda. So this is the three types. And they are fighting each other horizontally, and then also vertically. For example, because of tech hegemony, the American companies doing business in China, fighting with the Chinese government, or just give up market, Chinese market coming back to the U.S., or vice versa, and then Chinese companies in the U.S. have to be, to give up the U.S. market because of this enhancing, very increasing technological rivalry between China and the U.S. So the one thing that’s, in addition to these three major type of the data governance we see on the other international level, I also would like to add the one more kind of the group, which is Asia-Pacific countries. That is, I would say, the Asia-Pacific country is taking the trade-centric approach, I will say. Then that means, over the last several years, like Australia, Singapore, New Zealand, and also Japan, and then Korea, promoting the digital trade agreements or digital trade chapter inside the free trade agreements, and then try to promote free data flow. But the difficulty here is because of trade agreement is something that promoting trade, it’s really the real priority. So the balance with the data privacy, fair competition, intellectual property rights, and that’s really sort of the second layer of the objective. And then that what is they are so far creating the FTA is really focusing on the free data flow and openness does matter. So that means now that we are talking about from this morning, Minister Kono Taro, that Minister Taro said, well, data DFFT, data free flow with trust, and that is really sort of, it’s not compatible from the trade policy perspective. This is more from the data free flow, per se. So trust, how to just create trust under the trade framework is really now getting to the very difficult point. I think then other speakers may just talk more about the free trade agreements later. But the thing I would like to say is, as I said, that there is a three type of data governance at international, US, EU, Chinese time. But this market-driven approach is something that from 1990s, US government, that time is Clinton’s administration, promoting internet freedom agenda. And then that’s really embedded in the trade agreement, like the one thing is, for example, CPTPP, the provision is drafted by Google, actually. And so this is really the tech giant is what I would like to do is this way, is really written in the CPTPP. So this CPTPP became the base of the digital trade agreements these days. So when we just look at the international perspective, there’s a trade agreement, which is a sort of the given transparency. But on the other hand, the market-driven approach, and when we look at the countries by countries, even having the countries among the countries which had a very sort of deep digital trade provisions, they are taking completely different approach in terms of domestic data governance. For example, when we look at the regulatory perspective, this is a left side up. This is a sort of government’s legal regime around data uses and then reuses. For example, CPTPP, FTA, recently United Kingdom joined the CPTPP. But comparing with other CPTPP members, regulatory framework that the UK is really the best, and then especially the data governance, European countries have very good quality or the high quality data governance, so that the UK is really the top among the CPTPP countries. When we look at the responsible, look at the UK is really 100%, but other countries in the CPTPP is almost nothing, especially like in emerging countries, like such as Chile, Malaysia, Peru, Mexico. It’s really they don’t have the responsibility. They don’t promote such as a data charter, responsible AI initiatives and so on. They don’t have this kind of the law or regulation inside the countries. When we look at participatory, this is a stakeholder, to what extent wide variety of stakeholders participate in the policy making. Again, the United Kingdom is 100% and in Australia, New Zealand, somehow that Canada is more transparent, but other countries, not really. The stakeholders that cannot participate are not fully, or not at all participating in trade and data governance making. And then finally, international level, this is to see to what extent the government join efforts to establish shared governance rules, like convention, the Human Rights Convention. Again, here, even in Singapore, which is really the lead promoter of international trade agreement in this chapter, they are lacking the kind of human rights protection perspective. So, what I’d like to say is that today, the free data flow with trust is something that is very important, but still political level. And when we look at the domestic level, also the horizontal battle between the three major giant is in practice promoting or implementing free data flow with trust is very difficult and especially the role which trade agreement plays is very limited and also given a kind of the challenging the way that the WTO free trade agreement is more that looking at human data protection is a sort of the way to just the non-tariff measures we say the obstacle for the market access so we don’t know this is why that we have to think about how we promote free data flow is trust was why the variety of stakeholders engagement so interoperability is something that we really have to start or promote from the bottom-up level it’s not a top-down but the norms and then after date free data flow with trust is also the very different interpretation among countries so I stop here

Javier Ruiz Diaz:
so thank you so much so Minako has given us an overview of the issues around data governance and particularly as she has described you know how connected they are to digital trade which is one of the really important frameworks to understand this space now we are going to start going through some of the main data governance spaces and initiatives in the region and now Jamel here he’s going to give us an overview of the CBPR so I’m going to change the slides here

Jam Jacob:
thank you Javier and good afternoon everyone so my name is Jam Jacob I’m from the Philippines and I’m here representing the foundation for media alternatives it’s a civil society organization working on the shared space between human rights and technologies so as Javier mentioned my task for this moment at least is to provide an overview of the Asia Pacific Economic Forum’s CBPR system or cross-border privacy rules system which is one of those mechanisms currently in place that’s supposed to regulate in some way the flow of information personal data in particular so we are we were we have been talking about data governance so as far as the APEC CBPR is concerned it’s it zeros in on one aspect of data governance which is the flow of information so briefly the CBPR is actually the as I mentioned was developed by the APEC it was launched basically around 2011 and it is still currently in place but as I will be discussing in a bit its future actually is quite in question given this other system that has just been launched in the middle of last year so what is the APEC cross-border privacy rules system so in a nutshell it’s a certification system developed by the 21 member APEC group and the objective here is essentially to facilitate the free flow of information so that’s a familiar phrase that we’ve been hearing so far during our short time here today to facilitate the free flow of information at least among those economies participating in this particular system free flow while at the same time ensuring there is supposedly adequate data protection or data privacy measures so how does the APEC CBPR system work so if you are an organization that’s based in any of these at least nine member economies currently participating in this system you can get yourself certified rather and by doing so once you become certified you are essentially able to at least this is the idea you are essentially able to transfer personal data to another certified organization in another APEC economy that’s participating in the CBPR system so that’s essentially the benefit that you get if you become certified now how do you become certified it’s essentially through an assessment and this assessment has two components the first one is basically self-assessment you are given a questionnaire as an organization you are given a questionnaire by one of these so-called accountability agents and the objective of this questionnaire is to determine how much your data protection policies and practices measure up or are aligned with the so-called program requirements so this program requirements of the CBPR system we can more or less look to them as the standards against which all certified organizations are assessed or are evaluated and then once you are done accomplishing this questionnaire you turn it over to the accountability agent and this accountability agent also performs an independent assessment so more or less it verifies or checks how accurate your own self-assessment was in terms of your ability to meet the so-called program requirements then after this two-part process if the accountability agent is satisfied it recommends that your organization be granted or be given such certification so it recommends to the APEC body that that group within the APEC to provide you with that certification and then once that is done your name as an organization and a few other details pertaining to your certification is displayed on the APEC website now just two other things to complete I suppose that picture is who are these accountability agents they may consist of private entities as well you apply to become an accountability agent with your with your government or whoever within the within your country is responsible for your country’s part or your economy’s participation in the CBPR system it is possible for a government agency to become an accountability agent so that is very much an option as well and then finally how do you become as a as an economy how do you become a participant to this system you also apply to the APEC privacy subgroup and they screen your your application I don’t think it’s that complicated we don’t have enough time to go over the specific requirements but suffice to say it has its of four requirements as a country as an economy if you want to participate you comply with those four requirements and that essentially jumpstarts the process of you joining this particular system so next okay so given that this is how the CBPR system works what have parties so far seen as the so-called benefits of participating in this system for proponents certainly they say that by taking part in the system as an organization you are able to present some tangible proof that you are at least committed to upholding data protection or data privacy within your organization and specifically when you carry out data transfers across borders it helps also as far as governments are concerned this supposedly benefits them as well because it more or less identifies which are those organizations that have that are more or less likely to comply with their own respective data protection laws in any given particular country that’s participating in the system second is it creates a common set of standards as we all know now while the GDPR stands out certainly among this growing number of data protection laws around the world there is that clamor already to have one set of standards so as to make compliance especially among businesses organizations easier so by having the CBPR system in place those so-called program requirements there’s they represent that common set of standards of course whether those standards are effective or insufficient that’s a different conversation altogether and then finally proponents say that this system this mechanism is good because it does not disrupt local regulatory environments and by that we mean if you have a date for example if you use Japan as an example Japan has its own data protection law by participating in the CBPR system it does not in any way change the regulatory requirements of the of the domestic data protection law if you are required to perform or to observe specific regulatory obligations none of those change you are still required to comply with all of those things even if you are a Japanese organization that is certified under this particular system now with those as benefits critics and other observers also have noted a lot of issues or problems with this particular system one is it’s in a it does not actually provide adequate data protection the CBPR system has the APEC privacy framework as its main guidance document if you will and the APEC privacy framework is essentially rooted in the OECD fair information principles which dates back to 1980 if I’m not mistaken and as pointed out by a lot of critics while the OECD principles have actually been updated I think it was in 2013 the APEC privacy framework has not it has remained stagnant since it was developed around 2003 or 2004 so there’s that and then you have this small buy-in among even the APEC members so APEC has 21 members and only 9 currently are participating in the CBPR it actually has a partner system which is the privacy recognition for processor system and in that part and that one focuses on data processors and that one only has two participants I think that would be the US and in Singapore so I guess that that’s that also shows or is indicative of how effective this system is if even among APEC members that even half see it fit to participate in this mechanism so what signal does it provide to to others it lacks positive influence on domestic laws I think many would consider the GDP as currently the gold standard as far as data protection laws is concerned and this much is evident when we see all of these new data protection laws cropping up all over the world and certainly in our region in Southeast Asia and the influence of the GDPR is very much evident but because of the nature of the CB of CBPR of APEC CBPR wherein it does not supposedly change any of the existing data protection laws and does not compel any government participating in the system to change their existing data protection laws so it has very limited positive impact as well there is that under representation of civil society so while this is mainly backed by the government it requires significant participation by the private sector especially when we consider that accountability agents actually are mostly part of the private sector themselves and civil society is mostly left out of the conversation so if we are talking about the three types of data governance mentioned earlier one would think that civil society would be the ones to push more for a human rights centric type of governance but because they are left out of the conversation for the most part we have we see the second type of governance more evident here which is the market driven one. As far as the legality and enforcement of legal challenges it also the issue here also can be traced to the APEC itself because the APEC unlike other regional organizations it has no chart it has no constitution to speak of it’s not a treaty so any mechanism it develops is mostly consensus based so there are no existing mechanisms that would really a strong mechanisms at least that would really compel governments to abide by the requirements of this system and more so those organizations actually certified under this system. And then there is the question of fragmentation this is actually quite ironic in the sense that proponents of the CBPR system because they say that it creates a common set of standards they say that it tends to solve or at least helps avoid fragmentation by providing that common set of standards but if you look at the CBPR itself because it is focused only in data controllers and you would require another system the PRP I mentioned earlier to also deal with data processors so it is also inherently fragmented unlike other systems or mechanisms in place that already takes into account data controller data processor relationships and all these different permutations. And then finally there is that issue of cost it’s not actually cheap to get yourself certified and it’s not and it was not very easy for us to look for actual figures to determine how much it cost but there is this at least one accountability agent based in the US that provides a rough estimate so they say that it takes an organization over there between $15,000 to $40,000 to get itself certified. In Singapore they only provide the $400 amount to as I think it was an application fee but the assessment fee itself there is no figure that we were able to secure to provide again a general estimate of how much it cost to get yourself certified for instance if you were in Singapore. And before I end I would allocate just this one slide about the global cross-border privacy rules forum. So why is this relevant when we’re talking about the APEC CBPR? Well as I mentioned earlier this was established just last year in 2022 and it is important because it is essentially a replica of the APEC CBPR system and its partner system the PRP systems. Very much a replica in the sense that the same countries who are now participating in the APEC CBPR are actually the same countries also behind the establishment of the global privacy rules forum with the exception of Mexico. And then all its elements at least so far because they are still in the process of developing it with additional details so far what we’ve seen is that all the different mechanisms the elements of APEC CBPR have also been transplanted to the global forum. Even the accountability agents recognized under the CPPR they will be automatically recognized also under the global CBPR forum. There are some small or changes or differences like in the forum they now recognize two types of participants. You have members and then you have associates. are essentially economies or countries that are looking to become members but are not yet immediately ready to do so, and the example we have right now is the UK, who have actually not just signified, I think, but they are actually, if I’m not mistaken, already an associate of the Global CBPR Forum. So that’s essentially why this is critical, or at least a very important part of the conversation, because if the Global CBPR Forum actually progresses, the question of sustaining still the APEC CBPR becomes very valid, why still maintain the APEC CBPR when you already have this system, a new system which is broader in scope, in operation. But for the moment at least, these same countries behind the APEC CBPR and Global CBPR Forum make it very clear that these two systems are independently operated. So supposedly they do not affect how the other operates, but yeah, we’ll have to look again at this particular situation in the future, depending on how much things progress as far as the Global CBPR, Privacy Rules Forum, what happens to it. If you are interested more in additional details about the APEC CBPR, and to some extent the Global Privacy Rules Forum, we already have the report available on that URL that you can see on the screen, and you can download it later.

Javier Ruiz Diaz:
Thank you. So yeah, we will share with all of you, we’ll put those URLs in the Zoom, and we’ll also share them with you at the end of the meeting if you want, we’ll give you all the URLs. So this was a look at the CBPR, which is one of the systems that is trying to become a global standard for data. Now we are going to hear another presentation for another system that is not, I mean it’s not the same, but it’s also being used as an example for what could be an approach to global data governance, which is the Digital Economic Partnership Agreement, the DEPA model. So we are going to have a presentation coming in online from Pablo Trigo-Kramchak, who is going to speak from Chile, and I think he will be joining in directly, so I think I’m just going to switch off this mic as soon as we confirm that we got their audio.

Pablo Trigo Kramchak:
Can you hear me? Okay, Javier? Yes, I think we can hear you. Can you hear me? Javier, can you hear me? Yeah, I don’t. Okay, great. Yeah. Okay, thank you. Thank you very much. Well, my name is Pablo Trigo-Kramchak, I’m a researcher at the University of Chile Faculty of Law, and I’m going to present some, briefly, some of the elements and findings of a report that we have prepared on the Digital Economic Partnership Agreement on its approach to cross-border data flows. It just, well, this study has been developed thanks to the support of the Digital Trade Alliance. Well, first, some context. In the modern data-centric digital economy, as you know, the collection, processing, and sharing of personal data plays a central role, and data flows are a foundation of international digital trade. And despite the increasing relevance of this topic, it’s not yet possible to achieve an international consensus to comprehensively tackle these diverse aspects of digital trade at the multilateral level. As a result, it has become more common to find digital trade provisions incorporated in new FTAs, resulting in what is often described as a spaghetti bowl on regulation in the digital trade sphere. Privacy and data protection concerns have gained increased prominence in negotiations, but the intricacies of data governance make the landscape quite complex. What further complicates matters is that the three major global players, the United States, the European Union, and China adopt distinct approaches to data governance. It was mentioned before by other speakers. That is very clear. The U.S. takes a sectoral approach, allowing businesses to set rules and regulate privacy. The European Union strictly safeguards personal data under fundamental rights law and through comprehensive domestic regulations. And the approach offers, this approach offers a robust personal data protection and is not open for negotiation. On the other hand, China has implemented strict regulations for personal data protections, aiming to boost its data-driven economy and internal security. Well, the Asia-Pacific countries have adopted some of the most advanced agreements focused on digital trade, such as the U.S.-Japan Digital Trade Agreement, the Singapore-Australia Digital Economy Agreement, the SADEA, and the Digital Economy Partnership Agreement, the DIP. But it’s also important to keep in mind that, for example, the CPTPP contains an e-commerce chapter that applies to measures that affect trade by electronic means, a concept not defined, including provisions on personal information protection and cross-border transfer of information by electronic means, among other issues. Well, DIPA was signed in 2020 among Chile, New Zealand, and Singapore, and is one of the first comprehensive international agreements on digital commerce. And during the negotiation process of these agreements, parties constantly refer to their intention to delineate an adequate framework for the progressive, available, and safe implementation of emerging technologies, including the governance of certain activities that underpin these technologies, such as cross-border data transfers. Nonetheless, many DIPA provisions refer to non-binding commitments, starting points, or preliminary roadmaps for future collaboration. In this sense, DIPA has been specially conceived and designed as a pathfinder to influence and contribute to multilateral trade negotiations on digital trade by means of its flexible language and modular structure. It’s to be noted also that DIPA parties envision this instrument as a model for possible WTO e-commerce initiatives, as well as digital economy efforts within the EIBIT Forum and other international bodies. But what are the questions? The main question that this report is trying to solve and address is that in this scenario, the question arises whether DIPA, one of the pioneering comprehensive international agreements on digital trade, could be considered a pathfinder in shaping global rules for cross-border data flows. DIPA is frequently considered an innovative FTA, especially in terms of its adaptable design and modular approach. And in this sense, new parties can determine the extent of their commitment without being bound to fully embrace the entirety of this agreement, of its provisions. Well, the elements that we keep in mind when developing this report, these studies, these researches, is that the purpose of these studies was to analyze how the DIPA approach can shape and guide future negotiation and international governance rules on cross-border data flows and to determine whether DIPA provisions constrain governments from adopting their own standards on personal data transfers, identifying the possible added value of DIPA provisions. When examining, taking into rules concerning data flows governance, DIPA closely aligns with the approach championed by the United States during the TPP negotiation, and that were the basis of the CPTPP agreement. Even though the United States is not a participant in the CPTPP, well, the provisions draw heavily from the TPP, where the U.S. played a significant role in shaping the negotiation process. This similarity might be attributed to the brief negotiation period for DIPA. It took just some months, which inevitably required drawing heavily on existing agreements. For future accession process, this factor, to replicate the format of the language contained in older agreements, especially the TPP, the CPTPP, is problematic. Countries that are not signatories to the CPTPP may have reservations about adopting these provisions for many reasons, political, economic, and social. This circumstance could affect the possibility for certain countries seeking to join DIPA to accept all its models. It should be noted that DIPA provisions related to governance of data flows contained in Article 4.3 affirm the parties’ previous levels of commitment contained in older agreements. This is crucial. This is very important. And among other effects, this will imply a reference to the commitments made by the three original signatories to DIPA in the CPTPP, to which they are also partners. Regarding our main findings in this research, we can see that the DIPA rules governing cross-border data flows take verbatim CPTPP cross-border transfer of information provision, also affirming parties’ previous levels of commitment contained in older agreements. As highlighted in our report, this situation can pose significant challenges. Questions arise about which prior agreements will set parties’ level of commitments relating to cross-border transfers, especially when there may exist inconsistent or contradictory rules at this stage. The complexity becomes more accentuated when considering countries that are not part of the CPTPP. And this factor could affect the chances of new DIPA parties embracing all these models. Despite DIPA personal information protection provisions contained in Article 4.2 being more detailed than the CPTPP text, they fail to set the minimum standards. And furthermore, DIPA strongly promotes interoperability through the adoption of mutual recognition of voluntary self-regulatory approaches, which could be considered in some way equivalent to the implementation of comprehensive or sectoral privacy or data protection rules. And this in some way affects heavily the impact, the added value of DIPA in terms of protecting consumers’ rights, users’ rights in digital environments. It’s difficult to claim that DIPA could be considered a trailblazer for future cross-border data flow relations. However, two issues deserve our attention. The first is that because of its modular approach and uncompromising wording, DIPA is an agreement that arouse growing interest worldwide. Even not just in the Asia-Pacific region, you can see that this is generating some interest even in Europe. The United Kingdom expressed some interest in being part of DIPA. The second is that even if no concrete commitments are made regarding data flows, this does not mean that DIPA declarations cannot have any legal relevance. On the contrary, different legal effects could derive from these declarations, especially as more countries join the agreement. In this context, it’s important to consider that this treaty is inserted in a broader context intertwined with other trade agreements in which DIPA parties are engaged. And a statement made into DIPA could be considered an international dispute settlement, for example, even when the dispute does not emanate directly from DIPA’s specific provisions. Moreover, these statements could play a significant role in resolving disputes arising from breaches of other commitments made within DIPA that are not excluded from the dispute settlement model when the crux of the matter pertains not to correct interpretation or application, for example, Article 4.3. It’s worth noting that while dispute settlement models do not extend to Article 4.3, the cross-border transfer of information by electronic means, it is indeed applicable to Article 4.2, protection of personal information, which, for example, states in paragraph 10 that the parties shall endeavour to mutually recognise the other party’s data protection trust marks as a valid mechanism to facilitate cross-border information transfer while protecting personal information. This is a connection, for example, you can consider the previous presentation of the previous speaker regarding the APEC system, CBPR system that is based in this trust mark with this self-certification scheme model. To sum up, regarding cross-border data flows, DIPA does not forge a new path but rather follows the trajectory set by the US. This circumstance has a decisive impact on the added value offered by DIPA, by this digital trade agreement. If we consider that DIPA has been specially conceived and designed as a pathfinder to influence and contribute to multilateral trade negotiations on digital trade, it’s not difficult to imagine that a broad accession or replication of these terms and provisions could end up producing a de facto demonisation under the US data governance model. Thank you very much for your attention. Well, if you want to see, to check the full report, you can find it in the Digital Trade Alliance website. I’m going to copy in the chat section of Zoom to complete the link to this report. And, well, if you have any kind of questions, please, you can see there my email address. And, well, I’m open to any kind of question or comments. Thank you very much.

Javier Ruiz Diaz:
Thanks, Pablo, for such a comprehensive overview. So, we have first looked at a system of certification, a system where countries agree that companies can get private certification and that certification can be used to send data across borders. That’s one of the models that we have. The next model we have is a modular trade agreement, but it’s not really a trade agreement. It’s like a collection of individual commitments where countries can pick and mix and make their own combination. But, as we’ve seen from the research, there are some questions as to how that modular approach works in the sense that some of those partial commitments apparently could involve buying wholesale the previous regulatory regimes of the founding members of the DEPA, which brings us back to the fact that those are in CPTPP, in the Trans-Pacific Partnership Agreement, and so they may not be actually that new. So, that is the discussion. Next, we are going to look at the third model. We are going to look at it coming from this region. It’s something that is quite different. It’s the Indo-Pacific Framework for Prosperity, which is a new kind of agreement that is not just a new type of trade agreement. It’s not even technically a trade agreement. We are going to hear a presentation online from NAN, from Engage Media, which is a civil society organization very active in this whole area, in this region. So, NAN, could we check? Do you have access to the Zoom? Yes. Can you see my slide? Yes, we can see the slides. We can hear you loud and clear. Thank you.

NAN:
Perfect. Thank you very much for having me. My name is NAN. I’m a digital rights project coordinator at Engage Media. We advocate for digital rights and digital safety. in South and Southeast Asia. So today I’d like to talk a little bit about the IPATH. So thank you for the introduction there. Engage Media is part also of the Digital Trade Alliance. So when it comes to IPATH, or Indo-Pacific Economic Framework for Prosperity, this not-so-much trade agreement involves 14 countries, mainly the U.S., India, some countries in Oceania, including Japan and East Asia as well, and a lot of countries in Southeast Asia. And what’s very interesting about this treaty is that the U.S. government shares all chapters and controls the text of the IPATH. It began a few years back, and it’s expected to conclude by November 2023. Unlike other FTAs, the IPATH will not offer market access and GSP privileges. There are four pillars to this free trade agreement, and the digital trade chapter is not publicly available, and negotiation is exclusive and secretive. This also includes an enforcement mechanism, although the U.S. will have the ability to conduct inquiries against any violations. Review of public comments processes in Australia and the U.S. and media statements of big tech companies have raised a lot of issues by big tech companies. The issues that were raised are limit measures that restrict cross-border data flows and, secondly, prevent disclosure of source code and algorithms, and, thirdly, remove any requirements for establishment of local offices and local representatives. The U.S., Mexico, Canada FTA, or hereby in the U.S. MCA, is explicitly cited as a baseline for commitments in the IPATH. Now, in IPATH, corporate interests dominate the U.S. trade advisory system. Eighty-four percent of U.S. trade advisors represent business interests. Sixty-nine percent of those advisors represent large corporations and their trade associations. And, as you can see, extensive lobbying by big tech companies are involved, and the provisions in the IPATH are very big tech friendly. U.S. trade representatives have solicited advice of big tech. We have evidence on that, on the digital trade provisions. And should comparable proposals in the digital trade within the trade pillar resemble those found in the U.S. MCA, there could be significance on the digital rights in terms of transparency, accountability, and the ability to ensure that technology is used in a way that respects digital rights. So some of the issues that I’d like to focus on is, first, if IPATH leveraged the model of the U.S. MCA, it will have enforceable cross-data flows requirements, generic measures such as that of Thailand’s Personal Data Protection Act. A local law will almost certainly fall foul with the U.S. MCA’s style of free flow data provisions. Domestic measures aim at enhancing privacy and security of data, as well as providing regulatory access to data could therefore be affected by this IPATH provision. Restrictions on cross-border transfer can be used to protect the privacy, of course, and ensuring access to enforcement mechanism, particularly the EU’s GDPR and numerous jurisdictions that implement specific measures on pertaining to health, telecom, and mapping or financial data. It will also enhance administrative efficiency and improve domestic law enforcement and promote economic and strategy purposes, meaning domestic capacity and cost storage, taxation, etc. So the implications of this provision will make it difficult to introduce any domestic measures to restrict cross-border data transfer. It will narrow the scope of exceptions. Necessity and proportionality requirements are very high bars to meet in IPATH, and so the requirements for pre-transfer consent could be very hard to be met. In the ultimate analysis, such provisions help data flow to countries with poor data protection standards, for example, the U.S. And while the debate surrounding restrictions on cross-border data flow is ongoing, because while it facilitates states to carry out certain elements of regulatory work, data localization will also impose barriers on firms for big data and cloud computing in decision-making and lower the efficiency of their operations. So there are valid concerns and arguments on both sides. The next issue that IPATH will likely raise is the establishment of safeguards against forced source code disclosure as a condition to market access. Countries in Southeast Asia and South Asia as well are still developing regulatory responses to the use of algorithms. For example, Indonesia is now ongoing with its AI ethics policy. One tool of regulation is ensuring greater transparency and accountability over how algorithms and software in general work. With this provision, it will restrict various tools available to a state to promote competition and fairness in the digital economy. Preventing such disclosure in the future may lead to algorithmic discrimination in areas like employment policies, insurance policies, or search engine rankings, which will have an effect on the competitiveness of smaller businesses in the global South. Comparatively, the RCEP does not contain an analogous clause, and the CPTPP prohibition on disclosure only applies to source code and not on algorithms. It does not require an investigation to have been initiated or recognize that a party may require a modification, but in IPATH, it will. And so this is a trajectory of a stricter deregulation of disclosure. And I’m quite sure that everyone is aware of the danger of algorithm non-disclosure. Of course, it will limit the ability for independent ex-ante verification of how a software product works. It can be essential to ensure that software-based products and services function as they are meant to do and limit the risk arising from the use of software, as well as limiting the black box issue with AI. And secrecy goes against the developing regulatory consensus on the use of AI tools. Explainability, robustness, security, and safety are key design principles put forward by the OECD AI Policy Observatory. A number of proposed laws seek to ensure pre-deployment verification of software and AI, for example, the American Data Privacy and Protection Act, which requires to conduct AI impact assessment, including design of algorithms. In USMCA, the provision has certain general exceptions, but ultimately it implies that source code and algorithms contained in software products cannot be accessed by a regulator until an inquiry has been initiated into an identified malpractice. And that’s very, it’s a slippery slope. It’s also worth noting that the RCEP does not contain, again, the analog clause restricting the disclosure, while Article 14.17 of CPTPP applies only to, again, source code and not algorithm, but also it does not specifically limit access to a source code to instance where an investigation has been initiated. So limiting the ability of parties who require changes to algorithm and source code that could be found to be biased and otherwise harm individuals is something that will likely happen should this provision be included in IPEF. The non-disclosure could also hinder the trajectory of AI regulation at the regional level and also at national level. And I’d like to also point out that here’s me participating in the IPEF negotiating rounds. So as I mentioned, the negotiations are completely confidential. However, we do provide stakeholder listening sessions to which I was a part of in the fifth round of negotiations in Bangkok. And I have raised multiple concerns regarding the violation of digital rights should IPEF take the North American trade agreement model and the codification of it. And I just like to share that after I shared my intervention at the stakeholder listening session, a U.S. trade representative from the embassy actually reached out. However, in that intervention, I specifically targeted the Thai trade representatives because it was quite clear to us that Southeast Asian nations or signatories to this trade agreement is not gaining much but are losing more. And so I was targeting the Thai trade reps in particular on the digital rights issues. And yeah, to wrap up, the codification of USMCA-like provisions will limit the regulatory options available to the signatories to implement public or consumer interest regulation over the digital ecosystem. The free flow of data classes poses a limit. The ability of countries to implement localization norms and the inclusion of such classes would allow for the continual flow of data to the U.S. where it would be subject to relatively lower standard of data protection norms. Additionally, provisions restricting access to source code and algorithm will limit the ability of regulators and independent entities to scrutinize and conduct external assessment or audit on the software products prior to their deployment. This has many challenges, as I mentioned before. In particular, the gig economy also and labor issues. And limiting the ability to properly audit AI system is premature, so it could in the future limit attempts at ensuring safety and security and fairness of AI tools, which is something that I’d like to highlight here. And closing remarks is the FDA provisions that seek to preemptively limit the ability for states and regulators to implement public interest or consumer interest regulation in this digital space is something that we need to push back. And regulatory frameworks concerning the digital ecosystem are still in the nascent state in many Southeast Asian countries.

Javier Ruiz Diaz:
And with technology being rapidly changing, putting these stipulations and provisions in the FDA will restrict certification. If it’s very hard to get a European Union decision, let’s go for certification. So these models are really becoming global. They are going way beyond the region. But now, if we don’t have any more questions, we are going to move into the next part of the discussion, which is to try to… So we’ve mapped the kind of regional initiatives. Now what we want to do, we are going to do a little tour of the region, where we are going to get representatives from consumer and digital rights groups to give us a little context of what are the more pressing issues, so we can see how these regulations will basically touch the reality on the ground in some of the key countries. We don’t have people from all the countries, so don’t feel if you are from a country in the region there is no one here. It’s not by design, it’s probably because we couldn’t find, but of course feel free later on to speak. And I think the idea is that we want this to be participatory and to get your input. And the same thing for our colleagues online. If you want to raise any questions, please put it in the chat, and then we will get someone to read it out for you here in the room. So we are going to start with Japan, and I’m going to give the floor to Amy Kato from Consumer Japan to start giving us an overview, and then we’ll continue with Korea, then Philippines, and then we are going to… Okay, we are being asked about having a break. Okay, so let’s do this thing. We are going to take a break because I think three hours, you know, I can see that no one is as committed to the cause here to sit for three hours, you know. So we’ll take, should we take, being realistic, should we say we’ll reconvene at quarter past, and hopefully, you know, we know who you are. If I see you out there later, you know, there’s not, and you didn’t come back, you know, I’m going to be like pointing at you. So we are going to take a little break, so you can grab some water, maybe go to the restrooms, and then we’ll reconvene at quarter past. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you.

Amy Kato

Speech speed

175 words per minute

Speech length

13 words

Speech time

4 secs

Jam Jacob

Speech speed

132 words per minute

Speech length

2445 words

Speech time

1111 secs

Javier Ruiz Diaz

Speech speed

158 words per minute

Speech length

1530 words

Speech time

580 secs

Minako Morita-Jaeger

Speech speed

124 words per minute

Speech length

1642 words

Speech time

794 secs

NAN

Speech speed

122 words per minute

Speech length

1693 words

Speech time

831 secs

Pablo Trigo Kramchak

Speech speed

130 words per minute

Speech length

1783 words

Speech time

825 secs

Paula Martins

Speech speed

152 words per minute

Speech length

367 words

Speech time

145 secs

Building Resilient Infrastructure | IGF 2023 Day 0 Event #203

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Seth Ayers

The escalating threat of climate change is disproportionately impacting developing nations, with estimates suggesting that extreme weather events could push as many as 130 million individuals into severe poverty. This downturn in living standards is leading to mass migration, altering economic and social dynamics in numerous countries.

On a brighter note, the advantages of resilient infrastructure, particularly in these developing nations, have been greatly emphasised. Every dollar invested in enhancing the resilience of infrastructure projects is believed to generate a fourfold return. This figure highlights the immense potential that resilient infrastructure offers for social and economic development, which could help counteract the adverse effects of climate change, decrease the level of poverty and stem migration.

Digital technologies represent another vital tool in combatting climate change. About half of the developing nations view digital technologies as an integral driver for mitigating the impacts of climate change. Furthermore, approximately 75% of countries deem these technologies essential in their adaptation strategies to climate change.

However, there is a glaring digital divide, as roughly a third of the world’s population remains offline. Countries with access to digital technologies can deliver services to their citizens three times faster than those without such advancements. This stark disparity underscores the immediate need for greater investment to bridge this digital divide and address the issue of insufficient internet access.

The concept of ‘greening’ the telecom infrastructure has been proposed as a fundamental response to climate change. The World Bank suggests two approaches: ‘greening digital,’ which involves making telecom infrastructure adaptable to climate change, and ‘greening with digital,’ which refers to the use of digital technologies to help reduce carbon emissions. The efficient implementation of these innovative strategies could combat the impending threats posed by climate change.

In addition, the ‘Lifeline Report,’ published by the World Bank in 2019, is notably significant in this context. This report assesses various forms of critical infrastructure through comprehensive country case studies and underlines the ‘one in four’ return on investment ratio for resilient infrastructure.

Open source data is acknowledged as essential for evaluations and the implementation of strategies, particularly by the World Bank. This institution utilises open source applications and data for in-country evaluations.

Lastly, there is a degree of uncertainty surrounding NetBlocks as a data source. Regardless, the analysis clearly demonstrates that urgent and strategic actions, particularly in the realms of resilient infrastructure, digital technologies, cybersecurity, and open data utilisation, are prerequisites in our fight against climate change and worldwide socio-economic challenges.

Tomohiro Otani

The analysis presents a vigorous, positive sentiment concerning the strategies prepared for disaster recovery and network environment monitoring. This readiness extends both locally and globally. Japan employs a robust strategy, operating a network of 12 centres spread out nationally, with the primary units located in Tokyo and Osaka. Advantageously, the time difference between Asian and European regions is leveraged for continuous global operations.

A notable facet of their strategy is the innovative use of advanced technology in disaster recovery. This includes a disaster recovery tool which proves instrumental in monitoring real-time situations and promptly coordinating teams to fix network failures. The disaster countermeasure dashboard efficiently collects data needed to delegate team members, considering the extent of environmental damage. Big data-based disaster management systems aid in simplifying the understanding of the situation’s scale and complexity. In conjunction with these technologies, drones are employed for remote monitoring, further bolstering recovery procedures.

Moreover, there is meticulous planning for recovery and continuity in cases of disasters. This comprises provisions for operators to download vital information to their devices before going on-site, crucial if telecommunication services fail. Also, no terrain is off-limits for network recovery efforts, including land, sky, sea, and even space.

Furthermore, regular training for disaster recovery and boosting network resilience is a key aspect of the strategy. This involves collaborations with various public sectors and municipalities, aiming not just to restore connections, but also to bring a sense of relief and positivity to the affected population.

The ongoing assurances to strengthen internet access were also underscored. Japanese operators are diligently constructing a 5G network nationwide, with the results showing substantial progress; over 90% availability has been realised in 5G coverage. However, Tomohiro Otani noted the disparities in coverage and speed between mobile and fixed services and conventions such as 4G and 5G. Otani suggests referring to the Ministry of Internal Affairs and Communications (MIC) website for precise figures on coverage and internet speed.

In conclusion, this widespread investment in disaster recovery, utilisation of cutting-edge technology, comprehensive continuity planning, and ongoing training, coupled with an ambitious 5G rollout programme, illustrates a progressive approach towards safeguarding and enhancing Japan’s digital infrastructure.

Roderic S. Santiago

PLDT Smart, awarded the title of fastest mobile network in the Philippines by GLOMO, has prioritised disaster resilience and sustainability. This aligns their corporate objectives with Sustainable Development Goals (SDGs) 9 and 11, which pertain to industry, innovation, infrastructure, and sustainable cities. Headed by Eric Santiago, the company has implemented a variety of measures to optimise network performance while ensuring service sustainability and continuity during calamities.

Harnessing renewable energy, Smart has set up solar-powered sites, particularly beneficial for the Philippines, a nation frequently tormented by approximately 20 typhoons annually. This innovative approach significantly emphasises the necessity for a robust and resilient network that can maintain function during such adverse times. Additional initiatives include the Emergency Cell Broadcast System and Smart Satellite, technologies which are pivotal for disaster response and ensuring continuity of service.

Moreover, recognising the importance of education regarding disaster preparedness, systematic efforts have been made to utilise different modes of learning. This includes devising and distributing short online videos imparting essential knowledge about disaster responses. These are disseminated widely through websites and text messages. However, acknowledging the challenges of digital literacy among the population, face-to-face learning initiatives have also been instigated. Certain areas are targeted using caravans to provide hands-on demonstrations, ensuring education is reach-inclusive, even for those who aren’t tech-savvy.

An interesting concept in action is intergenerational learning, leveraging young people’s updated knowledge and adaptability. Youth are encouraged to teach their older family members about disaster preparedness, leading to increased household awareness.

In conclusion, the actions of PLDT Smart reflect a comprehensive approach towards disaster resilience, established through technological innovation and extensive education efforts. Their strategies highlight the practical intersection of multiple SDGs, reciprocally integrating objectives focusing on industry, infrastructure, and urban resilience with education. It’s an exemplary model, demonstrating the potential synergies achievable through incorporating various SDGs in strategy formation and execution.

Ken Katayama

Ken Katayama inaugurated the session with a warm and welcoming introduction before seamlessly transitioning into his role as moderator for the discussion on ‘Building Resilient Infrastructure’, conducted in Kyoto. His commendable affiliations include the distinguished Keio University Global Research Institute and the globally recognised Toyota Motor Corporation. This clearly establishes his depth of knowledge in Industry, Innovation and Infrastructure, and Quality Education, underscored by the primary themes of SDGs 9 and 4.

In a bid to maintain the efficiency and structure of the proceedings, Ken designated specific time allotments for speakers. Each contributor was assigned an eight-minute slot for their presentation, whilst a consolidated time of fifteen minutes was set aside for the entire Japanese delegation. This arrangement reflected Ken’s adept management skills and his emphasis on time efficiency, exemplifying a well-organised and succinct session.

In alignment with the principles of Quality Education (SDG 4), Ken championed interactive learning by encouraging attendees to participate actively. He specifically acknowledged Sugimoto-san’s potential to make valuable contributions to the conversation, thus fostering diverse viewpoints on the topics discussed.

With its focus on cultivating innovative solutions to reinforce resilient infrastructure and nurture sustainable cities and communities, the session manifested its alignment with SDGs 9 and 11. Inclusive and engaging moderation, alongside efficient time management, demonstrated Ken’s commitment to a productive dialogue.

In conclusion, Ken Katayama’s proficient moderation exemplified a well-structured, interactive dialogue centred on the development of resilient infrastructure. His prioritisation of effective time management, the promotion of audience interaction, and affiliation with impactful institutes highlighted his dedication to innovation, infrastructure development, and quality education. His work attests to the interconnected nature of these goals.

Masayoshi Morita

In 2011, the devastating Great East Japan Earthquake caused extensive damage to the nation, particularly impacting the critical communications infrastructure. This catastrophic event resulted in a worrying total of 385 communication buildings going offline, creating immense hurdles for the country’s emergency response systems. Additionally, sixteen communication buildings were severely damaged, and a staggering 1.5 million power lines were severed. This disaster starkly highlighted the vulnerability of Japan’s communication infrastructure to such destructive natural events and underscored the urgent necessity for efficacious and efficient disaster response strategies.

However, demonstrating fantastic resilience, the NTT group exhibited a robust response to the catastrophe. They mobilised an impressive workforce of 10,000, which remarkably enabled the restorative efforts of all affected communication buildings within a span of just 50 days, given the massive scale of devastation. This was primarily achieved by leveraging satellite communication devices and installing mobile base stations in the affected regions, establishing a vital lifeline in the mitigation of the overall aftermath of the disaster.

Learning from the calamitous event, the company has henceforth implemented several preventive measures to optimise their disaster response strategies. Key among these measures is the strategic initiative of relocating communication buildings and cables further inland or onto hillsides, thus reducing the risk of direct impact from tsunamis and floods. Innovative technologies, such as drones, have also been sought to predict potential disaster areas and plan efficient recovery procedures, thereby, significantly enhancing their disaster management strategy. Furthermore, a renewed emphasis on training, incorporating disaster response simulations and joint training initiatives with the Self-Defense Force, has been introduced to ensure a well-prepared, adept response team.

The company’s proactive approach aligns perfectly with two of the United Nations Sustainable Development Goals (SDGs), notably SDG 9: Industry, Innovation and Infrastructure and SDG 11: Sustainable Cities and Communities. By prioritising innovation in disaster management and developing resilient infrastructure, alongside creating sustainable and safe urban spaces, the strategies clearly embody these objectives.

In conclusion, the aftermath of the Great East Japan Earthquake illuminates the essential importance of comprehensive, effective disaster management strategies within the field of communications infrastructure, emphasising the pivotal role industry innovation plays in enhancing resilience against natural disasters.

Audience

The panel discussion encompassed a wide array of topics, with a principal focus on cyber security, disaster control, training, and telecommunications. Attending the audience, Sasaki Motsumura serves Workforce Development in the cyber security division at NICT. He underlined the challenge of raising awareness and pre-emptive preparation for potential incidents. Drawing comparisons with disaster control, the significance of hands-on training and simulation exercises were highlighted as means to boost awareness and preparedness. Specifically, the analogy between disaster prevention and control when addressing issues of cyber security was brought forth.

Regarding the governmental structure of Australia, the discussion revolved around their collaboration with the telecommunications industry. It was observed that both the federal government and the states/territories take on separate roles when interacting with the telecommunications sector. Furthermore, throughout exigencies such as bushfires and floods, the Australian government reportedly liaises with the telecom industry on an individual case basis, illustrating a tailor-made crisis management approach rather than a blanket policy.

Investment and finance also held a vital place in the conversation. In particular, the need for deciphering the return on investment was spotlighted. A bold proposition suggesting every £1 of investment yields £4 return invited the audience’s scepticism. This underscored the necessity for a clear conception of return on investments, specifically in the broader context of national infrastructure.

An enquiry was also raised about the status of the internet in Japan, focusing specifically on coverage and speed outside of emergency situations. This line of questioning shows the audience’s interest in understanding standard operational procedures for internet access in Japan and its potential performance during a crisis.

Regarding education, questions revolved around strategies to inculcate resilience and effective communication among the population. It was advocated that for successful long-term benefits, a deep understanding and transparent reflection of investments in education and communication are crucial. This connects back to the previous enquiry about understanding return on investments and indicates a more comprehensive concern about resource distribution in these areas.

To sum up, the discussion yielded significant insights into disaster management, cybersecurity, infrastructure investment, and the education system. It underlined the essentiality for a clear understanding of investments, the significance of public education, and the critical role training and exercises play in cyber defence and disaster control.

Yasuhiro Otsuka

Situated in a region prone to natural disasters, Japan frequently contends with severe disturbances to its communication services. A significant 20% of global earthquakes with a magnitude of six or above occur in the country’s vicinity. These intense seismic activities, coupled with destructive typhoons, often trigger heavy rainfall, flooding, and landslides. These severe weather patterns subsequently cause drastic interruptions to the country’s communication networks.

The continuous provision of communication services has become integral to our modern lifestyles and the smooth operation of economic activities. Our societies’ growing dependence on these services emphasises the urgent need for resilient networks to withstand the frequent natural disasters that Japan experiences.

Taking heed of this call for resilience, the Ministry of Internal Affairs and Communications (MIC) in Japan has implemented revised technical standards. The aim of these changes is to extend the operational times of major base stations, thereby fortifying the stability of communication services across the country. These measures have facilitated the establishment of more than 9,000 mobile base stations, capable of continuous operation for 24 hours or longer. In addition, mobile power supply vehicles and portable generators have been deployed nationwide as part of a broader disaster response strategy. Notably, these advancements align with the United Nations’ Sustainable Development Goals (SDGs) on industry, innovation, and infrastructure enhancement.

The MIC has also recognised the need for collaborative approaches to manage natural disasters. This has led to the establishment of partnerships with various government agencies, local municipalities, and public utility operators. These collaborative efforts aim to strengthen disaster resilience in Japan by leveraging the combined capabilities of different sectors. Platforms have been set up to facilitate collaboration on critical elements such as electricity, power, and fuel distribution, as well as the removal of obstacles on roads following disasters.

In conclusion, as Japan grapples with its susceptibility to natural disasters, the country is making positive strides towards industry innovation and infrastructure resilience. The vital role of communication services in contemporary society has been acknowledged, and a strong focus is placed on maintaining these services amidst natural disasters. This collaborative approach, which involves various sectors, is a significant step towards achieving the United Nations’ SDGs related to sustainable cities, communities, and infrastructural innovation.

Tara Konarzewki

Australia is grappling with a rise in extreme weather events, evidenced by the Australian Bureau of Meteorology. The summer of 2019-2020 witnessed widespread bushfires, causing extensive devastation and significantly impacting the nation’s telecommunications. These events, coupled with a recurrent pattern of destructive weather, underscore a pressing need for robust disaster resilience strategies.

The mandate for handling such disasters is shared amongst several entities. The federal government is in charge of managing policy and regulatory frameworks, whilst state and territory governments are charged with handling disaster response. Concurrently, the direct operation and maintenance of telecommunications networks fall upon the carriers themselves.

Key efforts towards strengthening disaster resilience include the Better Connectivity Plan by the Australian government. This initiative, supporting Goals 9 and 13 of the Sustainable Development Goals (Industry, Innovation, and Infrastructure, and Climate Action), devotes over $1.1 billion to rural and regional communities in Australia. The plan incorporates numerous measures to fortify resilience against the natural disasters that Australia routinely faces.

Furthermore, Australia’s federal structure significantly influences disaster control and engagement methods. Incidents are tackled on a case-by-case basis, necessitating cooperation between the government and the telecommunications industry. This structure calls for event-specific planning given the cyclical nature of bushfires and floods at specified times of the year.

Overall, whilst there’s escalating action towards addressing the urgent issue of climate-induced disasters, more targeted planning and collaborative efforts between the government and telecommunication providers could boost Australia’s resilience to these extreme weather events. Long-tail keywords included in this summary include Australian government, extreme weather events, telecommunications, bushfires, disaster resilience strategies, Better Connectivity Plan, Sustainable Development Goals, Industry Innovation and Infrastructure, Climate Action, rural and regional communities, and federal structure.

Session transcript

Ken Katayama:
as well, too. Thank you. じゃあ始めちゃいますね。じゃあ、帝国になりましたので始めさせていただければと思います。 Now it’s 12 o’clock, so let’s get started on time. As I mentioned, for those who are on the outside, you’re more than welcome to sit with us. 外側に座っている方々、是非別に一緒に中に座っていただいても全然結構でございますので、 是非インタークティブなディスカッションをできればと思います。 So I’ll be the moderate… Wait, I don’t have… 私、通訳しなくてもいいんですよ。 I don’t have to translate for myself, right? Okay, so for the benefit of my non-Japanese speakers, welcome to Kyoto and welcome to the Building Resilient Infrastructure session. My name is Ken Katayama. I’ll be moderating in my role as the Keio University Global Research Institute role. My day job is I work for Toyota Motor Corporation. We’re a mobility company, but today I’ll be speaking in my role as Keio University. Thank you, Otsuka-san, for giving me the opportunity to be able to moderate this session. We’d like to keep this session on schedule. I’ve asked my speakers to keep to eight minutes each. The Japanese delegation of Otsuka-san and Morita-san and Otani-san will be speaking 15 minutes in total, and then Eric on my right will be speaking eight minutes, and then we have a speaker from Australia also doing eight minutes. So we want to provide also an opportunity for each of our speakers to be able to re-comment on some of the other things that they’ve heard, and also especially give the audience, like Tsukimoto-san, an opportunity to comment. So if I may, Seth, are you ready to speak? While you’re getting ready to… and are you online, Seth? Yes, I am. Can you hear me okay? Can everybody hear Seth? I guess you can, right? Seth, can you say something again? Yes. Good. Okay, so now… Yes, hi everyone. Great. Now I think also we have to get your slide on the screen. We can hear your voice, but… I think we have to show the slides of the first speaker, Seth. Ohtani-san, right? I’m glad I got it right. Do we have Seth’s slides up? I’m still trying to use some time. So it doesn’t seem like the slides are up. I’m still going to use some more time. The reason why I asked my former colleague Sugimoto-san from NICT… Do you know Sugimoto-san? She’s really super, super sharp. And so she used to work at NISC, which is our cyber security agency. And also she did privacy work at MICC before that as well too. So I know that she’s probably in a right position to make a comment. Because we want a comment. So Seth… Okay, so Seth, why don’t you just start talking… And we’ll figure out the slides as we go along.

Seth Ayers:
Please. Sure. Sure. Sounds good. Okay. All right. Good afternoon, everyone. Thank you very much for this opportunity to participate in this discussion. And I’m sorry that I can’t be there in person with all of you in Kyoto, but very happy to have the opportunity to participate virtually. My name is Seth Ayers. I lead a business line in the World Bank that focuses on the nexus between digital technologies and climate change. And for the presentation today, I’ll talk a bit about how we think about these issues, the overlapping issues of digital and climate change and resilient infrastructure in particular as part of addressing the connectivity challenges that we’re facing globally. And I see it looks like my first slide, at least the cover slide is up. And actually, if you could please go to the next slide. Thank you. Great. So I wanted to begin the presentation just by giving a bit of context with a couple of key statistics on issues about climate change that many people may be aware of. Certainly climate change is impacting all countries globally, but for developing countries, the impact is far greater. And estimates that up to 130 million people will be pushed into poverty who currently are not in poverty. So they’re above the poverty line at the moment, but because of these severe weather events, whether it’s flooding or droughts are gonna be pushed into a poverty situation. And then we’re going to also, the predictions are for massive migration patterns. So significant changes to people’s economic and social situations as a result. of climate change. These challenges are particularly acute for small island developing states for reasons that many people would likely be aware because of their low-lying nature. They’re particularly susceptible to a number of severe events, such as rising sea levels and intensified cyclones and hurricanes and storm events. And on the positive side, the work that we’ve done has identified that if you do make an investment into making your digital infrastructure or any sort of infrastructure in general more resilient, that there’s a massive benefit for doing so. For every dollar that’s invested into making infrastructure more resilient, you actually get a $4 return. So significant upside to making your infrastructure more resilient. Next slide, please. OK. So the other thing I wanted to flag was that many developing countries are recognizing the significant potential that digital technologies. So when we talk about digital technologies, it’s telecom infrastructure, data infrastructure, as well as being able to use these technologies through digital skills. So developing countries are recognizing that digital technologies are going to be fundamental to both addressing our climate challenges as well as making sure that countries are able to adapt to these new dynamics. And so here is just we did a review of the nationally determined contributions of developing countries. And you can see on the mitigation side that nearly 50% of the countries look at digital technologies as a key driver. And in the case of adaptation, so helping countries adopt to climate change, nearly 75% of countries are putting digital technologies as a key driver. Next slide, please. So if we recognize that digital technologies are critical to tackling climate change, and that we know we need to make this infrastructure more resilient, what are some of the challenges that we face in building this infrastructure globally? So as we went through the pandemic, and we were reliant on doing more virtual activities in countries in which digital technologies existed, they were able to deliver services to their citizens at a rate of three times. They were able to get services to three times the number of individuals than countries that did not have these digital technologies. So whether you’re dealing with a pandemic or a severe climate event, having digital infrastructure is essential for service delivery. Yet there’s about 3 billion people globally who do not have access to the internet. So about a third of the world’s population is not online. So that’s a huge challenge is if we recognize the power of digital, both for not just ensuring service delivery and helping people adapt to climate change, but also as a tool for helping high emitting sectors such as transport and energy reduce emissions, we’ve got to address this connectivity gap and to do so in a sustainable and green way. Next slide, please. Okay, so at the World Bank, we’re tackling this issue on two fronts. We have what we call greening digital, so ways in which we look at the sector itself. And this is, I’m gonna dive a bit deeper into some of the work that we do here on resilient infrastructure. So when we talk about greening digital, it’s greening the digital infrastructure, both from a resilience standpoint on the adaptation side, and then also from a mitigation perspective. So the digital sector emits about the same amount of GHG emissions as the airline industry. So it’s somewhere between 1.5 to 4% of global emissions come from the digital sector. So it’s not a nominal amount. So it is important also, as we talk about resilience today, also to see the sector as also a generator of GHG and how to tackle that issue as well. And then the other piece that we look at is greening with digital. And this is ways in which digital technology can help countries adapt to climate change, develop new tools, early warning systems, et cetera, that can make countries be better prepared to climate events. And then also to use digital technologies to reduce emissions in other key sectors, such as energy, transport, agriculture. These sectors that are very high emitters, how can digital technologies help to reduce those emissions? Next slide, please. One more minute, Seth. One more minute, thank you. Okay. So I’ll go through, actually, if I could go to the next slide. All right. So, and actually, let me, I’ll jump into the country example to Kenya, please. Next slide. This was just identifying some of the issues that telecom infrastructure faces, which are amplified as a result of climate change. So next slide, please. Okay, very good. So in the case of Kenya, so Kenya has quite good telecom coverage, nearly 100% of the population has some sort of connectivity to the mobile network, and it has a decent fiber network as well. It is particularly susceptible to floods and storms and you can see from these pictures, how it was impacted in 2022, as well as in 2023. Next slide. Okay, and then, so what we’ve been doing with the Kenyan government is using GIS to be able to map all of the mobile network sites, particularly the base stations, and to overlay that data using flood prediction, and to be able to then determine which base stations are likely to be impacted or could be impacted by floods, and then to be able to make assessments on what the potential costs could be in order to then make adjustments and to improve the resiliency of that mobile infrastructure. Next slide, please. All right, and so then, not just looking at the country level, but more broadly, this gives you a sense of at different stages of the telecom infrastructure, whether it’s looking at international connectivity and submarine cables, steps that can be taken in order to reduce the potential threat or risk of climate events on this infrastructure. And this is where redundancy comes into particular play. And then next slide, and I’ll wrap up. This just shows you a bit more on the details with the other aspects of the telecom infrastructure, but maybe four quick things to wrap up on that developing countries are particularly vulnerable to climate change and severe weather events. So resiliency is key for telecom infrastructure as a driver for economic and social development. Two, there’s a high return on investment, $1 of resilient infrastructure investments returns $4. There’s generally uneven implementation, urban areas, high populated areas tend to have better resilient infrastructure, but more work needs to be done on rural areas. And in order to do this well, you need to do proper risk assessments and then make sure that redundancy and diversification are part of your plans going forward. Thank you very much.

Ken Katayama:
All right, thank you, Seth. I mean, again, I’ll give you time afterwards also to reiterate some of your points, but I appreciate the global view as well as the explanation on Kenya. So from Kenya, we’d like to go to Japan for our three speakers from Japan, starting with Mr. Otsuka from the Ministry of Internal Affairs and Communications. Otsuka-san, five minutes, please.

Yasuhiro Otsuka:
Thank you, Katayama-san, for allowing my intervention. Good morning, everybody. My name is Yasuhiro Otsuka of Ministry of Internal Affairs and Communications. I’m in charge of policies related to today’s topic of how to deal with increasing risk of natural disasters and keeping our people connected. So now let me start my presentation. Seth-san just explained an increasing risk caused by climate change, and Japan is a country prone to such disasters. Next slide, please. Now let, oh, sorry. Earthquakes and typhoons are typical examples of natural disasters that affect the communication services in Japan. About 20% of earthquakes with magnitude six or greater occur around Japan. Later, Morita-san of NTT will explain the impacts of the Great East Japan Earthquake of 2011. on communication services and the efforts to recover from the damage. The typhoon may be related to presentation of ERIC-SAN data, but typhoons often cause heavy rain, flooding and landslides and have huge impact on communication service in Japan as well. Next slide, please. Here is an example of typhoon, Fakusai, what we call by number in Japanese, number 15 in September 2019. Very strong winds with a maximum instantaneous velocity of more than 50 meters per second caused collapse of power transmission towers and utility poles and triggered large-scale power outages of up to 930,000 households in Tokyo metropolitan area and around. Restoration of power took a long time, and as a result, many mobile base stations ran out of batteries and stopped operation. During the worst period of damage, as shown in the map, more than 2,000 base stations of mobile operators combined stopped operation, mainly in Chiba prefecture, east of Tokyo. Next slide, please. As our daily lives and economic activities depend much more on communication services, the demand for continuous provision of communication services is getting even higher. The MIC, Ministry of Internal Affairs and Communications, is working closely with operators to ensure stable provision of communication services. The role of MIC is to set the frameworks to realise resilient networks, and operators are expected to build and operate resilient networks based on such frameworks. As you can see in the dotted line, let me explain three examples of such frameworks. Next slide, please. The first example is to set technical requirements of networks in the form of regulations or guidelines. In the event of typhoon Faksai in 2019, which I mentioned earlier, prolonged power outages caused many mobile base stations to suspend their operations. In respect, the MIC revised the technical standards and stipulated that major base stations should be able to operate for a longer period. Specifically, as shown in the slide, base stations covering local government offices are required to satisfy operation time of 24 hours or longer. And base stations covering prefectural offices are recommended to satisfy operation time of 72 hours or longer. Based on the standards, now 9,000 mobile base stations satisfy operation time of 24 hours or longer nationwide. And in addition, some 4,000 mobile power supply vehicles and portable generators are deployed nationwide. One minute. And next slide, please. The second example, collaboration of related parties that includes MIC operators as government agencies and local municipalities and other public utility operators are essential to deal with natural disasters. So MIC set up platforms to such collaborations. Collaborations on electricity power, fuel, and cooperation on restoration of obstacles blocking roads are being implemented. For example, with information on the prospect of power restoration, communication operators can manage their mobile power supply vehicles more effectively to avoid suspension of operation. The next slide, please. This is a final slide of my presentation. Studies are being conducted at the MIC to realize inter-carrier roaming in the event of major disasters. It is expected that users of carrier A who have stopped operation will be rescued by the network of another carrier B. Study is under way to realise inter-carrier roaming by the end of 2025. I explained three examples of the role of MIC to set frameworks to realise resilient networks. I give the floor to Morita-san of NTT and Otani-san of KDDI. They will present their activities in the past and at the present to make their network resilient, as well as to recover from the damages promptly. Thank you for your attention. Thank you, Otsuka-san.

Ken Katayama:
Morita-san and Otani-san will be speaking in Japanese, so… So, Morita-san, please give us five minutes of your time.

Masayoshi Morita:
Thank you. My name is Morita, and I am in charge of disaster prevention at NTT. I will be speaking in Japanese. I am in charge of disaster response in the NTT group in Japan. So, I will be speaking in Japanese. I am in charge of disaster response in the NTT group in Japan. I will be speaking in Japanese. As Mr. Otsuka mentioned earlier, Japan is a country that is subject to many disasters. Today, I would like to talk about the situation of the Great East Japan Earthquake, which was the largest earthquake in Japan’s history. As you can see on the X mark, this was in March 2011. 9.0 magnitude earthquake occurred on the 11th of November. The magnitude 9.0 earthquake and the tsunami that followed have caused great damage to Japan. Next slide, please. This is the damage to the communication facilities at the time. Due to the impact of the tsunami, power was cut off, and communication buildings were shut down in 385 locations. In addition, 16 communication buildings were damaged by the tsunami. There were 28,000 power poles that connected the cables. There were 1.5 million power lines that were cut off. This was a major damage. Next slide, please. This is the reconstruction of communication buildings. From the 8th to the 2nd day of the earthquake, power was maintained using batteries and emergency engines. However, it was difficult to procure the fuel needed for supply. Three days after the quake, power was cut off in 385 locations. The NTT group mobilized 10,000 people and friends and recovered all the buildings within 50 days. Next slide, please. Next, I will introduce the efforts to secure the communication of those who were actually affected. We brought satellite communication devices and mobile base stations to the affected areas to secure communication in those areas. Next slide, please. In addition, we provided voice message services to secure communication to confirm the safety of the affected areas. We set up 1.2 million voice calls for free. We also set up free internet connections. Next slide, please. This is about securing the power supply of the communication building. Since the commercial power supply has been shut down, we have installed batteries, self-generating devices, and other mobile power generators to secure the communication. Next slide, please. This is a picture of the recovery of the intermediate line. This is a picture of the recovery of the intermediate line. As you can see in the picture, the cable was washed away by the tsunami along with the line. In order to recover this, we have installed 11 batteries in the vicinity to restore the communication cable. Next slide, please. Now, I would like to introduce the efforts to strengthen our disaster response. Next slide, please. Regarding the communication building itself, we have made efforts to relocate the communication building to the hillside in order to prevent the tsunami or flood. Next slide, please. One minute left. In the same way, we have relocated the communication cable to the inland in order to prevent the tsunami or flood. We have made efforts to relocate the communication cable to the inland in order to prevent the tsunami or flood. Next slide, please. We have also used various drones. We have used various drones, such as large-scale ones, to investigate from above, or small-scale ones, to investigate from above. We have used various tools. The data on the left shows the weather forecast. In the area where the disaster was predicted, we placed personnel in advance, and on the right is the situation of the power of the communication building. We used this as a one-dimensional observation and used it for the estimation of an efficient recovery plan. Next slide, please. This is the last slide, but it is about training for disaster response. We carry out simulation on the ground, disaster response training and joint training with the Self-Defense Force, so that we can recover in the early stages of maintenance. As I have explained so far, we make various efforts to maintain the communication of our customers in the early stages of maintenance, and to recover them in the early stages of maintenance. That’s all from me. Thank you.

Ken Katayama:
Thank you, Mr. Morita. Thank you for your concrete efforts. Next, Mr. Otani, please.

Tomohiro Otani:
In order to prepare for disaster and monitoring, assess network environment, we have prepared for network operation center locally and globally. In terms of local operation in Japan, we have 12 network centers all over Japan. The main center is in Tokyo and Osaka. It’s a dual operation for resiliency. In the case of global operation, we have Tokyo. Asian part and European part are taking advantage of time difference from day to day. And today, we would like to introduce our recovery mechanism for in the case of disaster to pinpoint to handle the situation and let people on site. We are equipped with a very brand new IT gears from the point of ICT sense. And this is the disaster recovery tool which indicates how we monitor current situation and how we assign the people to fix the network failures. And also, we have a dashboard for disaster countermeasures, collecting from various data. And we can easily assign people to on site considering the current situation of the affected environment. And also, we have disaster management systems based on big data as well. We collected a bunch of data from network equipment, traffic, operators, and so forth. And we can easily understand what is going on and what will go on. in the environment. And also, operator on site will utilize smartphone and like iPad types of gear. But in the case of disaster, the communication services also out of services. So before going to the site, they download various information to their devices, and they can still utilize the information even on site where the telecommunication service is no more available. One more minute, please. And also, we have a drone to monitor remotely. We can get the information 2D, 3D, even movies. So we can effectively manage, understand what happened in the areas. And we can send the people to fix the network failure from land, sky, sea, as well as space. Recently, we introduced a starting brand new technology, even in the case of disasters. So this is new information. And lastly, we keep training ourselves thanks to other public sectors and municipalities, locally, even internal. We hope we can provide relief from connection and make you smile. This is the end. Thank you very much.

Ken Katayama:
Thank you so much, Otani-san. Thank you. Let me give the clicker to Eric. So, Eric from Philippines.

Roderic S. Santiago:
Thank you. All right. Konnichiwa, everyone. My name is Eric Santiago from PLDT Smart. So before I start, I just wanted to ask if you’re familiar about the Philippines having 7,107 islands, and that depends if it’s high tide or low tide. So aside from that, as you know, we are also a typhoon-prone country. There’s approximately 20 typhoons entering our area of responsibility every year. So by just saying that, it is crucial, it is critical for us to have resiliency embedded in the design of our network. And because of that, I just wanted to share one of this award. So Smart is the wireless arm of PLDT. It’s the – PLDT is the Philippines’ leading integrated telco company. So this was in MWC where we were awarded by UCLOS, the Philippines’ fastest and best mobile network. Having best mobile network includes not only the coverage but especially the resiliency of the network during the time of calamity. We are doing numerous things to optimize it with energy-efficient solutions to further enhance customer experience and also to promote sustainability. So we have deployed a lot of solar-powered sites to be able to reduce power consumption and also help the environment. We have accelerated our rollout not only on the macro sites or outdoor sites but also on in-building. to further enhance our services. But one thing to note is really the disaster resilience. We have been supporting the United Nations Office Humanitarian Affairs mandate on how to support subscribers, especially during time of disasters. By doing so, we have promoted some of these products, particularly the emergency cell broadcast system. This is the first one that we deployed first in the Philippines to ensure that we will provide promptly alerts to all the subscribers prior to any disasters. We have been providing some of the smart satellite, just like my colleagues here. We also provide resiliency through satellite backhaul during time of calamity, especially when the terrestrial sites are damaged, are not functional. We have provided a lot of text broadcasting and a lot of support during time of calamities. In that way, it is important for us that during and after the time of the disaster, we will be providing some hotlines, smart phones, SIM cards, and all of those free communication access to our subscribers. We have recently deployed this and it became very popular. It’s a one-stop emergency comms kit that will include a solar panel, smartphone, satellite phone, a wifi, a megaphone, a wheel cell, a flashlight, and a lot of emergency comms training. This one here really saved a lot of lives. Aside from that, maging laging handa is always, it’s a Tagalog word for. for always being prepared. So we have caravan teaching around the nation on how to be able to be prepared in times of calamities and what’s the first thing to do. As I mentioned, I just wanted to reiterate, building a resilient network is embedded in our design. We have transport network not only within the Philippines but also submarine cables that we have resiliency. So in time of piper cuts that we will have other routes to be able to continuously provide connectivity. We are expanding them to almost 1.1 kilometers right now of a piper network. And our emergency operations center is always ready to be able to provide services during time of calamity. The last one that we got hit by the super typhoon Rai in December, 2021, it was around Christmas time where we have deployed a lot of our supports by air, by land, and by sea, just to be able to provide connection during that time. With that said, I would like to highlight this one. This is an outstanding award from our national response and cluster highlighting that VLDT Smart has truly been a partner during time of disaster by utilizing a resilient network that no Filipino will be left behind during this time. And with that said, I also would like to show you this video to summarize everything that I said. Thank you. Thank you. ♪♪ ♪♪ ♪♪ ♪♪ ♪♪ Thank you so much for the opportunity to present and share with you some of the initiatives we’re doing to deliver the resilient network to our fellow Filipinos in the Philippines. Thank you.

Ken Katayama:
Thank you so much, Eric. Very nice video as well, too. I appreciate that. Thank you. I think we have Dr. Kornazeski online. Hello? Hello. Yes, hello. Thank you. Welcome from Australia. I believe we were not able to do our pre-briefing today, Dr. Kornazeski. I was asking the speakers, Seth from the World Bank and Eric from the Philippines and yourself, for eight minutes each and then to provide an opportunity. for the audience and also some of the speakers to re-comment as well. So if I could ask you kindly to wrap up by about 12.48, our time, 12.49. I think if you’re in Australia, I guess that’s about 2.48. So I give the floor to you, Dr. Konowiecki. Thank you.

Tara Konarzewki:
Great. OK, great. Thank you very much. Thank you. There we go. Is that working? Can everyone see my screen? Yes, we can see your screen. Great. Thank you very much. Thank you for having me here at the forum today. All right. In the interest of time, I’ll get started. As in many parts of the world, Australia has experienced its fair share of extreme weather events in recent years. And the Australian Bureau of Meteorology has noted in recent reports that this warming is likely to continue. The effects of this warming in particular were demonstrated during the Australia 2019-2020 summer when large parts of the country were severely affected by bushfires. These bushfires resulted in tens of billions of hectares of land being destroyed, destruction of thousands of properties. Tragically, 33 lives were lost as well, including the death or displacement of an estimated 3 billion animals. Since then, a succession of La Niña weather events has caused significant flooding, impacting many communities across Australia over 2022. And the flooding has affected the everyday lives of many Australians, with many parts of the country, such as Sydney, experiencing its wettest year on record. I’ll just quickly go through these slides. So obviously, everyone in the room is aware of the impact of telecommunications from these disasters, such as fire. and floods, and I’m sure everyone in the room is also well aware of the impacts on communities, so in the interest of time, I’ll just pass through. Before I cover the key actions that the Australian government is taking to improve the resilience of telecommunications against disasters, I thought I’d just provide a general overview as to what the government’s role is when it comes to telecommunications disaster resiliency. So under Australia’s federal government structure, the Australian government is responsible for this includes responsibility for managing policy and regulatory settings for the sector, as well as providing grant funding to encourage certain activities, such as expanding mobile coverage in regional and remote areas. However, in Australia, it’s our state and territory governments, of which there are eight, that are primarily responsible for responding to disasters. Australia’s telecommunications carriers are likewise responsible for the direct operation and maintenance of their networks. This means that when a disaster occurs, telecommunications companies will typically work directly with the relevant state or territory government in accordance with the emergency management arrangements within that jurisdiction. The Australian government’s main role in this context is therefore to help prepare the sector to respond and assist with recovery from disasters. Now, this is just a general overview. And in practice, the state and territory governments will often work with the sector to help prepare them for disasters, such as by involving them in emergency planning. Likewise, the Australian government more broadly provides assistance to telecommunications companies on occasion when it’s necessary. So for example, the image up on the screen there is during a severe flooding event that impacted the northwest coast in January 2023, where floodwaters destroyed a major arterial bridge, which contained fibre optic cables. This caused major outages. And in response, assistance was provided by the Australian government in the form of a military aircraft to be able to get those technicians across the bridge. So in terms of what the Australian government has been doing to help prepare the sector for disasters, there are a range of actions that have been taken, recognising the serious impact of the 2019-2020 bushfires on Australia’s telecommunications network. The government has been implementing resiliency improvement initiatives through four core measures, which I’ll go through now. The first is the Mobile Network Hardening Programme Round 2, which is delivering around 1,000 mobile network resiliency upgrade projects across regional and remote Australia. So Stage 1 provided $13.2 million to upgrade battery backup power to a minimum of 12 hours at 467 base stations. Stage 2 provided $10.9 million for 536 resiliency upgrades. Over 461 of these upgrades have been completed so far, and they’ve included the installation of permanent power generators, increased battery reserves, transmission resiliency upgrades to protect against outages, and site hardening measures such as protective ember screening to protect sites from the potential impact of embers, radiation or flames. The second element is the SkyMaster Satellite Deployment Programme. So this programme has installed fixed satellite internet connections at over 1,000 evacuation centres and emergency service depots across Australia. This provides free backup connectivity via satellite. While many of these facilities already had fixed line connections, this way we can keep our emergency personnel connected and focused on the emergency response. The third element is the Temporary Infrastructure Deployment Programme, which is expanding the availability of portable assets such as cells on wheels and portable satellite kits which provide temporary coverage following a disaster. The final element is our communications program, which has been involved in developing communications material and other resources for stakeholders to use in an effort to improve general community awareness and preparedness for outages during disasters. All of these projects have had a real impact in improving the availability of telecommunications during natural disasters to date, and simple messages can help communities and tourists prepare and know how to get information and get help such as our radio broadcasting services. For example, during the March 2022 floods, temporary facilities were able to be deployed to evacuation centres in flood affected areas across the state of New South Wales, which provided critical connectivity for evacuated residents in their time of need. Another example of this was during major flooding in the state of Victoria when the communities of Bem River and Marlowe were isolated both geographically and in their connectivity, with both communities being able to access the internet through satellite services which were installed in the months prior. So while these examples have made a material difference, it is clear from more recent disasters that the threat posed is ongoing and that more needs to be done to improve the readiness of Australia’s telecommunications infrastructure. In acknowledgement of this, the Australian Government recently announced the Better Connectivity Plan for Regional and Rural Australia last year. The plan forms part of the Australian Government’s telecommunications agenda and is providing more than $1.1 billion to rural and regional communities in Australia. The plan includes $656 million over five years to improve mobile broadband connectivity and resilience in rural and regional Australia. As part of this, the Better Connectivity Plan includes $100 million in funding for additional measures aimed at further strengthening resilience against natural disasters. One more minute, please. Thank you. So of that $100 million, there’s two programs that are included in that. So the first one here is our Mobile Network Hardening Program, round two. And our second one is the Telecommunications Disaster Resilience Innovation Program. And if anyone would like any more information on either of those two initiatives, my contact details are up there on the screen. Thank you very much for your time today. I really appreciate it. Thank you.

Ken Katayama:
Well, you had 30 more seconds, I think. Dr. Skipper, I appreciate you wrapping up. Thank you so much. Well, I appreciate all of my speakers keeping to time. As promised, I have 10 minutes for questions. Also, maybe some for some follow-up. I do recognize, since Tara and Seth are online, in the audience, I have some colleagues. Ms. Sugimoto, she’s from the National Institute for Communications and Technology, as well as Dr. Komiyama. He’s from JPCert. They’ll probably have good questions and comments. But before I point out them, Seth, do you have something that you wanted to add that you weren’t able to cover in your presentation?

Seth Ayers:
I’m good. I think I was able to cover everything as needed and look forward to the questions. Great, that’s fantastic. Thank you.

Ken Katayama:
Thank you. Mr. Otsuka, did you have anything you wanted to add? Thank you.

Masayoshi Morita:
I’ll speak in Japanese. I’ll wait for your question. Thank you. Mr. Morita, do you have anything you wanted to add? I’m good, too. Thank you for this precious opportunity.

Ken Katayama:
Eric, anything else you wanted to add before we go? I’m sorry, Tara, was there anything you wanted to add, because you still have 30 seconds left? No?

Tara Konarzewki:
No further comments from me, thank you.

Ken Katayama:
Okay, sure. Great, thank you. Sasaki Motsumura, did you have a comment or question? Well, thank you for the… And you should say where you’re from, I didn’t explain NICT, I’m sorry.

Audience:
Thank you for the impressive presentations. I’m from NICT, I work for Workforce Development regarding cyber security, and we always use analogy with disaster prevention and disaster control when emphasising the importance of cyber security and incident handling. So I have a question for Tara-san about Australian measure for disaster control, do you have any training or exercise mechanism with operators for the disaster control? Because I always feel it’s difficult to raise awareness and prepare beforehand when nothing is happening, so thank you.

Tara Konarzewki:
Thank you for the question. Yes, unfortunately, the way that the government is structured, with the federal government having a very separate role to our states and territories, the way that we engage with our telecommunications industry is probably a little bit different to some other countries around the world, but we do engage on a case-by-case basis when events do happen, obviously there is engagement between the governments or jurisdictions in Australia that are affected and the telecommunications industry, but it’s my understanding that there is some planning that does go into certain events that we can predict, obviously, Australia as a nation, we’ve got a lot of work to do do suffer from bushfires and floods, and they do happen at particular times of year, as experienced by many countries in the world. But if you would like to send me an email, I might be able to follow up with a few more specific projects. Thank you.

Audience:
Thank you so much.

Ken Katayama:
Thank you, Tara. Thank you so much for the question. Okay, sure. Do you have a microphone, if you could identify yourself and let us know who you are.

Audience:
I thank you all for your talks and your presentations. My name is Jarell James, and I represent Koala and Internet Alliance here at the IGF. I just have two quick questions, and if you’ll take them. One is for Seth. You had briefly mentioned that $1 of investment would do $4 in return, but it was a little quick and unclear as to kind of the metrics behind that. And then my next question is actually for the kind gentleman from the Philippines. If you could just speak to how you educated the population on the importance of communication, the resiliency, these would be very valuable. Thank you.

Ken Katayama:
Thank you. Seth, would you like to take the first question? Sure.

Seth Ayers:
In regards to that data point comes from a report that the World Bank published. I believe it was 2019. It’s called the Lifeline Report. I don’t think I can put in the chat, but I can send, but if you were to search for Lifeline World Bank report, it was on all infrastructure, basically looking at a variety of critical infrastructure and then doing evaluations. There’s also a number of really interesting country case studies on different types of infrastructure in addition to telecom specifically. But the one in four is in reference to overall return on investment for resilient infrastructure. Yeah, so it’s often referred to as the Lifeline Report. So if you put in Lifeline reports, World Bank, it should pop up in your search engine. Is just a quick follow-up, is this the similar data that’s being used by folks like NetBlocks to calculate particular regional shutdowns or internet blackouts and how they would affect the economic situation on the ground? I don’t know if you know NetBlocks, but netblocks.org, they’re using a lot of open source data, but I didn’t know if some of it was coming from the World Bank. Yeah, so it’s a very good question. I’m not sure of those details. The map that I showed for Kenya is based on open source data. So we do do a lot of work in country with open source applications and data in order to do some of the evaluations. I’m not sure specifically on that piece of it, but we do publish most of our data is available online at data.worldbank.org.

Ken Katayama:
Thank you. Eric?

Roderic S. Santiago:
All right, well, thanks for the question. So educating our constituent is a continuous journey, but let me divide it into three segments, right? Number one, online learning. So we are developing short videos to clearly showcase how and what to do during time of disaster. During time of disaster. And we distribute that one through our channels online, our websites, and via text messages. Second item is really, we use caravans to go to specific areas to do face-to-face learning, right? Because some people are not that tech savvy that you need to really show it to them and demonstrate, and that’s very powerful. And the third one is really informing the youth of today, in our country. to ensure, to encourage, and teach their grandmas, grandparents, and parents in their family. In that way, it will be a continuous learning to everybody. Thank you.

Ken Katayama:
So, are you satisfied with the two answers? Is that okay? Okay, well, all right. If you do, well, I work for a company which is important, called Just-In-Time. So, there’s two more minutes left. And so, if there’s no… So, if I could ask you to keep your question within two minutes. So, please, I’ll give you the microphone, if that’s okay to you. Is there a microphone over there? Please, you can pass the mic. So, if you could quickly identify yourself.

Audience:
No, no, I’m from Kazakhstan. Hello, my name is Arman Andrasilov. And I just want a question from the Japanese group. I’d like to know the current situation, I mean, in common situation, the percent of coverage of Japan and average speed of internet for common time. Because your presentation was about emergency time, but I don’t know about common time. Have any problems in common time?

Ken Katayama:
Coverage and internet speed.

Tomohiro Otani:
Hello, thank you very much for your questions. And we are not sure the exact number of… the percentage of coverage and the average speed of internet access, since there are differences between mobile and fixed services. And also even between 4G and 5G, there are service quality differences. Currently Japanese operators are eager to construct 5G network nationwide. I believe recently more than 90% availability in terms of 5G, but please visit MIC website to find exact numbers. In terms of 4G, we believe that 1990 point something, but the number is really fluctuating depending on the time and the day and the year, so please confirm the website.

Audience:
Okay, thank you. So you have no problem with resilience in the comment time?

Ken Katayama:
So maybe if we can use that question after the session. I’m going to stop now. Thank you. So thank you for answering. It’s 1 o’clock, as promised, we are finishing on time. Seth and Tara, thank you very much for participating from overseas. Thank you. And thank you, everybody. So if we can give all of us a hand of applause for participation. Thank you. Thank you.

Audience

Speech speed

160 words per minute

Speech length

315 words

Speech time

118 secs

Ken Katayama

Speech speed

161 words per minute

Speech length

1213 words

Speech time

453 secs

Masayoshi Morita

Speech speed

136 words per minute

Speech length

833 words

Speech time

368 secs

Roderic S. Santiago

Speech speed

127 words per minute

Speech length

987 words

Speech time

466 secs

Seth Ayers

Speech speed

159 words per minute

Speech length

1894 words

Speech time

717 secs

Tara Konarzewki

Speech speed

165 words per minute

Speech length

1485 words

Speech time

540 secs

Tomohiro Otani

Speech speed

95 words per minute

Speech length

545 words

Speech time

346 secs

Yasuhiro Otsuka

Speech speed

138 words per minute

Speech length

800 words

Speech time

347 secs

Leave No One Behind: The Importance of Data in Development | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Samuel Nartey George

In a series of discussions, the importance of including all communities, including rural areas, in data governance was emphasized. It was noted that decisions should be based on data from diverse communities, but there is often a discrepancy in the data collected from urban and rural areas due to differences in connectivity and affordability. To address this, it was suggested that data governance should prioritize inclusion to ensure fair decision-making.

Another topic discussed was the need for affordable internet access and devices to promote comprehensive digital footprints. It was highlighted that underprivileged communities face barriers, such as expensive smartphones, which prevent them from fully participating in the digital world. To overcome this, the idea of creating cheaper “generic” technology, similar to generic pharmaceutical drugs, was proposed. This would make internet access and devices more affordable, enabling a more inclusive digital footprint.

The concept of affordable, generic devices was further explored, suggesting manufacturing cheaper devices on the African continent itself. Drawing inspiration from the pharmaceutical industry, the goal is to make technology accessible to all and bridge the digital divide, particularly in underprivileged communities.

Additionally, the potential transformative impact of connecting unconnected communities was discussed. Access to online educational materials was seen as a way to provide young people in these areas with employable skills, benefiting their economic prospects. Internet connectivity was also seen as crucial in establishing local businesses and livelihoods. Therefore, prioritizing internet connectivity in rural areas was deemed essential to unlock economic opportunities and educational advancements.

The significance of education, particularly digital skills, was emphasized. It was recommended to prioritize digital skills development to enable individuals to thrive in the digital era. One suggestion was to allocate a portion of the constituency development fund for acquiring digital skills, ensuring that individuals are equipped for the digital age.

Partnerships with the private sector and civil society were seen as essential in achieving the goals discussed. These partnerships would facilitate the transfer of necessary skill sets and support the implementation of initiatives aimed at promoting inclusion, connectivity, and digital skills development.

During the discussions, it was noted that Africa is being exploited not only for its natural resources but also for its data, largely due to a lack of understanding among leaders about the economics of data. It was emphasized that African countries need to prioritize and regulate their data usage to protect their interests.

Implementation checks of cybersecurity legislation and data protection laws were also highlighted. It was observed that while some countries have these laws, proper enforcement is lacking. It is necessary to have rigorous implementation checks to ensure effective cybersecurity and data protection measures.

Overall, the discussions emphasized the importance of inclusion in data governance, affordable internet access and devices, partnerships, education, and regulation of data usage. Addressing these issues can promote digital inclusion and protect data in Africa, leading to sustainable development and benefiting individuals and society as a whole.

Lee Mcknight

Data rights, privacy, and security are vital components that should be integrated into the governance framework of any community, village, or city. It is essential that citizens’ data rights are determined by the people living in the community, ensuring that their data is not harvested automatically without consent by external entities.

To protect citizens’ data rights, collaboration with the Africa Open Data and Internet Research Foundation has been established. This collaboration aims to bring connectivity to communities, with a primary focus on safeguarding citizens’ data from being harvested without consent. By working together, they are ensuring that individuals have control over their own data and that it is not exploited for external purposes.

In addition, community networks play a significant role in providing connectivity to the unconnected, enabling them to be included and accounted for in data. These networks have been advocated for by the Internet Society and have shown success in various cases. For example, in a previously disconnected community in Chile, the mayor states that thanks to a community network, her community now exists in the data pool. This demonstrates the positive impact of community networks in bridging the digital divide and ensuring that everyone has access to connectivity.

Moreover, advancements in technology have provided new opportunities for community networks. Today, these networks can incorporate energy solutions, such as portable microgrid solar-powered units. This innovation allows for longer connectivity durations without the need for additional infrastructure. A small portable microgrid solar-powered unit developed at Syracuse University has been deployed in over 20 countries, particularly in Ghana and the Democratic Republic of Congo. This infrastructure-less network not only provides connectivity but also addresses the issue of limited access to affordable and clean energy in many communities.

In conclusion, embedding data rights, privacy, and security into the governance framework of communities is crucial. Citizens’ data rights should be determined by the community members themselves, protecting their data from being harvested without consent. Collaboration with organizations like the Africa Open Data and Internet Research Foundation plays a vital role in achieving this goal. Additionally, community networks offer a solution to bridge the digital divide, ensuring that the unconnected are included and accounted for in data. By incorporating energy solutions, community networks can provide longer connectivity durations without the need for extensive infrastructure. These efforts collectively contribute to creating a more inclusive and secure digital environment for all.

Audience

The importance of education and skill acquisition in digital fields for African nations is emphasized in the analysis. It highlights Ghana’s ‘Girls in ICT’ program as an example of efforts to impart digital skills to girls in secondary schools. This program recognizes the significance of providing education and training in digital technology to equip the future workforce.

Furthermore, the analysis suggests that Africa should leverage its data assets and burgeoning internet growth, rather than giving them up indiscriminately for development aid. With the projected boom in internet users in Africa, there is an opportunity for the continent to harness its data resources and drive economic growth. By utilizing data and investing in digital infrastructure, Africa can create economic opportunities and bridge the digital divide.

However, concerns are raised about the excessive collection of data in Africa without appropriate data protection laws. The lack of a human-rights-based approach in data protection laws in most African countries raises potential implications for the future. The analysis points out that accountability for data breaches is often lacking, indicating a need for stronger data protection measures.

Additionally, current data protection laws in Africa often lack necessary elements such as accountability, equality, empowerment, and legality. It is highlighted that some countries enact data protection laws as a formality, rather than out of real necessity. This undermines the effectiveness of these laws and leaves individuals vulnerable to privacy and data breaches.

The issue of sensitive data being stored abroad due to the lack of local storage infrastructure is also raised. For instance, in Togo, electorate biometric data is stored with a private company in Belgium, and the contracts for such data storage are not typically accessible for scrutiny. This lack of local storage infrastructure poses risks in terms of data security, sovereignty, and control.

To address these concerns, the analysis suggests that Africa needs to build the capability to implement effective data protection laws. Despite having data protection laws, some countries, like Togo, lack an agency to effectively implement them. It is highlighted that a regional data registry is being constructed in West Africa with funding from the World Bank. This initiative aims to enhance governance and strengthen the implementation of data protection laws.

In conclusion, the analysis emphasizes the importance of education and skill acquisition in digital fields for African countries. It also highlights the opportunities for Africa to leverage its data assets and burgeoning internet growth for economic development. However, there are concerns regarding excessive data collection without appropriate protection, the lack of accountability in current data protection laws, and the need for local storage infrastructure. The analysis underscores the necessity of building the capability to implement data protection laws and advocates for a cautious approach, highlighting the importance of robust, human-rights-based data protection laws.

Victor Ohuruogu

The UN Foundation’s Global Partnership for Sustainable Development Data is focused on enhancing the availability, accessibility, and utilization of high-quality data for decision-making. Their efforts are geared towards improving the timeliness of data, fostering inclusivity of marginalized groups in the data value chain, and promoting accountable data governance. With over 600 participants from state and non-state actors across 35 countries, this global network is committed to advancing the cause of data-driven policy-making, bolstering SDG 17 – Partnerships for the Goals.

In Africa, there is a pressing need for data literacy and capacity building. The region faces significant challenges in terms of understanding data from both political and technical perspectives. To address this, the Global Partnership conducts programs aimed at enhancing comprehension of various data types and their usage. By empowering individuals with the necessary skills and knowledge, they aim to bridge the capacity gap and facilitate the effective utilization of data in Africa. This aligns with SDG 4 – Quality Education and SDG 17 – Partnerships for the Goals.

Although data holds tremendous potential for informing political decisions, it often lacks prominence in the political space. Many politicians do not fully consider data while making decisions, which can hinder evidence-based policy-making. By elevating the political profile of data, the Global Partnership seeks to strengthen the connection between the private sector and government. This collaboration can contribute to more robust and informed decision-making processes, aligning with SDG 16 – Peace, Justice and Strong Institutions and SDG 17 – Partnerships for the Goals.

With crises like COVID-19 further highlighting the importance of data-driven decision-making, the effective application of data becomes crucial in the humanitarian sector. The Global Partnership recognizes this significance and actively collaborates with humanitarian organizations and Presidential task forces to identify gaps in infrastructure, including computing infrastructure. By strengthening capacity in utilizing both infrastructure and data, policy and decision-making in the humanitarian sector can be considerably enhanced. This effort supports SDG 9 – Industry, Innovation, and Infrastructure and SDG 17 – Partnerships for the Goals.

Moreover, the proper management and implementation of data sovereignty issues are emphasized. Individuals whose data is being collected should have a say in how it is used, while considering the principles of data governance. The development of data governance skills within public sector institutions is crucial for ensuring that data sovereignty is respected and protected. These initiatives align with SDG 16 – Peace, Justice and Strong Institutions.

In conclusion, the UN Foundation’s Global Partnership for Sustainable Development Data is actively working to improve the availability, accessibility, and use of quality data for decision-making. Their efforts include initiatives such as enhancing data literacy, advocating for the political prominence of data, and strengthening data utilization in the humanitarian sector. By addressing capacity gaps, promoting accountable data governance, and engaging both the public and private sectors, the Global Partnership contributes to achieving the Sustainable Development Goals.

Kwaku Antwi

The speakers emphasized the significant impact of data as a crucial driver of economies, often referred to as the “new oil”. They highlighted how data has become the focus of global conversations and has the potential to revolutionize industries and drive innovation. Open data was also discussed, emphasizing the importance of making information easily accessible on various platforms. This allows for the sharing of valuable information across sectors and encourages collaboration and innovation. However, it was acknowledged that the digital divide poses a challenge to accessing data due to limited internet connectivity in some communities. Bridging this divide was emphasized to ensure equal opportunities for all. The speakers also stressed the importance of empowering communities with skills to effectively utilize data and set up networks. Open data and internet connectivity were seen as transformative forces in education, healthcare, agriculture, and other sectors. The conclusion highlighted the need to recognize and enhance Africa’s capacities in internet connectivity to drive transformation through the exchange of open data. Overall, the discussions underscored the crucial role of data and the potential of open data and internet connectivity to contribute to Africa’s inclusive growth.

Dr. Smith

In Africa, the implementation of data initiatives plays a significant role in accelerating progress towards achieving the sustainable development goals on the continent. These initiatives have the potential to address key challenges and support sustainable development in Africa, which faces a unique set of challenges and opportunities. By leveraging technologies and data, Africa can address issues such as poverty, inequality, and environmental sustainability.

One of the main arguments is the importance of implementing data initiatives in Africa. These initiatives can help African countries overcome various obstacles, including limited access to resources and infrastructure. By harnessing the power of data, governments and organizations can make informed decisions and develop evidence-based actions to address pressing issues. This can lead to improved service delivery, better governance, and enhanced economic growth.

It is crucial to address challenges such as data privacy, cybersecurity, and infrastructure development to ensure that these technologies benefit all segments of society, including the most vulnerable. Data privacy and cybersecurity are essential to protect sensitive information and maintain trust in digital systems. Additionally, investing in infrastructure development is necessary to ensure reliable connectivity and access to digital technologies across the continent.

The collaborative efforts between government, private and public sectors, and civil society organizations are vital for the successful implementation of data initiatives in Africa. Governments, along with the private and public sectors, must work together to create supportive systems and policies that enable the effective use of data technologies. Civil society organizations also play a crucial role in advocating for transparency, accountability, and inclusive decision-making processes.

By effectively using technologies, African governments can lessen existing challenges and continue to create more sustainable, inclusive, just, and prosperous futures for their citizens. Embracing innovative technologies can help bridge the digital divide, promote inclusivity, and empower marginalized communities. This, in turn, can lead to reduced inequalities, increased access to quality education, and stronger institutions.

The idea of Pan-Africanism, which recognizes our shared humanity and the importance of unity among African countries, is another noteworthy argument. Furthermore, the idea of a United States of Africa, which has been discussed since the Organisation of African Unity (OAU) days, is not as futuristic as it may seem. Both concepts highlight the importance of regional integration, cooperation, and solidarity among African nations.

However, achieving these goals requires grassroots mobilization and the active involvement of citizens. Leveraging technologies can help move this social movement forward by facilitating communication, organizing campaigns, and raising awareness. The united efforts of individuals, communities, and organizations are crucial in realizing the vision of a global Africa or a United States of Africa.

In conclusion, the implementation of data initiatives in Africa is essential for achieving sustainable development goals. It is vital to address challenges such as data privacy, cybersecurity, and infrastructure development to ensure that these technologies benefit everyone. Collaborative efforts between government, private and public sectors, and civil society organizations are crucial for creating supportive systems. By effectively using technologies, African countries can create sustainable, inclusive, just, and prosperous futures. The concepts of Pan-Africanism and a United States of Africa are not far-fetched, and grassroots mobilization is needed to achieve these goals.

Usman Alam

The Science for Africa Foundation, a pan-African organization that funds research and innovation across the continent, emphasized the crucial role of locally generated, governed, and diverse data for driving impact in Africa. They highlighted the need for diversity, equity, and inclusion in data, especially in the African context and with regard to women. The Foundation also highlighted the challenge of limited access to data, even at high governance levels, due to data being housed in specific ICT ministries. This indicates a need for greater collaboration and coordination in data governance.

Advocacy for equitable partnerships and the prevention of governance in silos was another key point raised. Usman Alam, in his advocacy work, underlined the importance of fostering partnerships that are fair and inclusive. He emphasized the significance of locally generated data that reflects the diverse facets of the demographic, as this ensures a comprehensive representation of the population. Alam cautioned against the risk of governing in silos, as it can hinder access to data, even at high government levels. This highlights the importance of breaking down silos and establishing collaborative frameworks for data governance.

Connectivity was also discussed as a transformative factor in driving research and innovation within the African context. The availability of connectivity can change how research and innovation are conducted and has the potential to unleash the full potential of individuals and communities. The concept of a community of practice was suggested as a means to foster new funding and implementation approaches, facilitating greater connectivity and collaboration in research and innovation endeavors.

Promoting equity through the hub and spoke model of funding was presented as a promising strategy. This model is based on partnering with other stakeholders to provide equal opportunities for all. It offers the potential to empower women’s leadership and strengthen the connection between government, researchers, and data. By fostering collaboration and sharing resources, the hub and spoke model can contribute to reducing inequalities and promoting equitable development.

Trust issues relating to the handling and sharing of personal data were recognized as a concern, particularly within the academic and expert community. This indicates the need for robust data governance frameworks and mechanisms to address these trust issues. Building trust is crucial for ensuring the effective and responsible use of personal data, thereby strengthening institutions and promoting peace and justice.

Lastly, the importance of harnessing endogenous knowledge for sustainability was highlighted. The successful response to the Ebola outbreak in Sierra Leone, Liberia, and Guinea underscored the value of utilizing local knowledge and expertise. Leveraging endogenous knowledge in the continent’s healthcare management can lead to more effective and culturally appropriate solutions. This highlights the significance of recognizing and leveraging local expertise and knowledge for sustainable development.

In conclusion, this analysis emphasizes the critical importance of locally generated, governed, and diverse data in Africa. It highlights the need for diversity, equity, and inclusion in data, the challenges of limited access to data, the value of equitable partnerships and the prevention of governance in silos, the transformative potential of connectivity, the role of the hub and spoke model in promoting equity, the trust issues surrounding personal data, and the value of harnessing endogenous knowledge for sustainability. By addressing these challenges and leveraging these opportunities, Africa can harness data and knowledge to drive positive impact and sustainable development.

Moderator – Yusuf Abdul-Qadir

The discussion highlighted several key points regarding the use of data and technology to enhance connectivity and drive development. Moderator Yusuf Abdul-Qadir emphasized splitting the conversation into two key components. The first component involves addressing gaps in data use and strengthening data ecosystems. This entails identifying and bridging any existing gaps in data usage, encouraging the effective use of data, and enhancing the overall data ecosystem. The second component focuses on leveraging technology and community networks to ensure universal connectivity. This involves leveraging technological advancements and community networks to provide connectivity to even the most remote and disconnected areas.

Inclusivity in accessing and leveraging data was also underscored as a crucial aspect. Ensuring that everyone is included and that no one is left behind in discussions on data access and usage is of utmost importance. However, specific strategies or approaches for achieving this inclusivity were not provided.

Community networks were praised for their ability to bring connectivity to previously disconnected areas. These networks are created by people to cater to the specific connectivity needs of their local communities. The Internet Society has been a strong advocate for community networks. An example of their effectiveness was highlighted by a formerly disconnected community in Chile that established a community network during the pandemic.

Furthermore, the integration of connectivity solutions with sustainable energy sources was deemed effective in enhancing the impact and efficiency of community networks. Syracuse University, in collaboration with the Worldwide Innovation Technology Entrepreneurship Club, has developed connectivity solutions that are packaged with portable, microgrid solar power sources. These solutions have been successfully deployed in over 20 countries and are currently being used in Ghana to connect school children in libraries.

The discussion also recognized that access to the internet and data has the potential to unlock people’s fullest potentials and affirm their existence. Data and internet access play a crucial role in acknowledging the interconnected nature of communities and fulfilling mutual obligations. This perspective aligns with the concept of Ubuntu, which advocates for interconnected existence.

Yusuf Abdul-Qadir supported the idea of using open data and community networks to facilitate the United Nations Sustainable Development Goals and unlock human potential. He believes that technology and data can unite the continent and drive development, supporting the notion of a United States of Africa as a way to foster a connected and inclusive continent.

The transformative power of internet connectivity and open data was acknowledged in various sectors such as education, healthcare, and agriculture. Internet connectivity allows for the sharing of information in an open environment, enabling advancements in these sectors. The availability of cloud infrastructure and access across diverse sectors was seen as essential for enhancing capacities and ensuring digital inclusion in the African context.

Additionally, the discussion emphasized the importance of gender equality and good health and well-being. Maximizing human potential requires advocating for gender equality and prioritizing good health and well-being. Connectivity has the potential to significantly impact these sectors, leading to positive outcomes for overall development.

In conclusion, the discussion provided valuable insights into the importance of data use, technology, and connectivity in driving development and achieving the United Nations Sustainable Development Goals. The need for inclusive access to data and leveraging community networks was emphasized. Moreover, the integration of sustainable energy sources with connectivity solutions was seen as effective. Internet connectivity and open data were recognized for their transformative power, while the importance of gender equality and good health and well-being was highlighted. Overall, the discussion underscored the immense potential of harnessing data, technology, and connectivity to unlock human potential and foster a connected and inclusive society.

Session transcript

Moderator – Yusuf Abdul-Qadir:
Mic check. Okay. Welcome. From Kyoto, Japan. For our discussion. Entitled leave no one behind the importance of data in development. If you’re here, you’re in for a treat. We have some dynamic panelists here with us in person. And in line with the theme of today’s discussion of not leaving anyone behind, we have those who will be joining us virtually. Before we get into it, and before I kind of get into a long monologue here, I want to invite my esteemed colleague and friend, Dr. Danielle Smith of Syracuse University, to open us with some opening remarks. Dr. Smith.

Dr. Smith:
Thank you, Yousef. Greetings to the session’s organizers, the presenters, the audience here in person and virtually, and to all those attending the UNIGF in Kyoto. I am truly honored to welcome you to this session. And I would also like to thank the people of Japan for your very warm hospitality. I’m very thankful for the leadership of Wissam Donkor and Kwaku Antwi at Africa Open Data and Internet Research Foundation, who are joining us virtually. Their tremendous support in planning this session has been instrumental. As we know, there are many ongoing data initiatives around the world. Implementing data initiatives in Africa can play a significant role in accelerating progress towards achieving the sustainable development goals on the continent. Africa faces a unique set of challenges and opportunities. And leveraging these technologies can help address key issues and support sustainable development. However, it is also important to address challenges such as data privacy, cybersecurity, infrastructure development, and ensuring that these technologies benefit all segments of society, including those who are the most vulnerable. In addition, governments, the private and public sectors, and civil society organizations must work together to. create supportive systems for the implementation of these diverse initiatives. By effectively using such technologies, African governments can lessen existing challenges and continue to create more sustainable, inclusive, just, and prosperous futures for their citizens. The session presenters are experts in this area and can help us understand these initiatives and broader global trends. It is particularly important to learn about developments on the ground and from experts who are in the field. Thank you again for joining us, and we look forward to an informative session. Thank you.

Moderator – Yusuf Abdul-Qadir:
Thank you, Dr. Smith. As I said, we’re going to get right into this discussion. And for those joining us in person and virtually, we’ve decided to split this conversation into two main components. The first component is addressing gaps, encouraging data use, and encouraging strengthening the data ecosystems. The first set of conversations will be situated in that piece here. And then the second component is leveraging technology and community networks to make sure that everyone gets connected. It’s essential that we don’t just theoretically have a conversation about ensuring access to data, leveraging data, but making sure that everyone is included and that no one is left behind. As I said, we have an amazing set of panelists here with us in person and online, and we’re going to get right to it. So to kind of begin, I want to start with you. Let’s see, Victor Ohuguru. Forgive me and correct me in your presentation for me not pronouncing your name correctly, who’s a senior Africa regional manager at the UN Foundation for Global Partnership for Sustainable Development Data. I want to begin with you. If you can just please give us a minute or two of opening remarks and let us hear how you’re doing this work at the UN. We don’t want to leave you behind, so let’s look. Well, while we get Victor, let’s go to Kwaku. Kwaku Antwi is a leader with this collective here from the African Open Data and Internet Research Foundation. Kwaku has been, as Dr. Smith mentioned, an important leader in this conversation and someone who has helped to drive the conversation. Kwaku, if you could please introduce us to yourself and please inform us as to how the AODIRF is leading the way and making sure that not just that communities have access to open data, but what are the tools that are necessary to accelerate the SDGs?

Kwaku Antwi:
Thank you, Yusuf, and hello to everybody. My name is Kwaku Antwi from the African Open Data and Internet Research Foundation. I’m in charge of the community outreach and projects and also in organizing events around open data initiatives across Africa through our network. I think one of the most important aspects we recognize in our current dispensation in this digital world, it’s being informed or being part of what is going on in our society. Data, as they say, is a new oil which is driving our economies. And being able to access data and utilizing data is also very important for all of us. I mean, as we speak now, there’s a lot of information ongoing as we are participating in this year’s IJF in Kyoto. And when we talk about data and open data, we talk about data which is available in formats which are easily accessible on portals or repositories which do not require enormous and mitigating. circumstances for you to not be able to access that data. Open data, we can say is one of the biggest drivers of open communities and also being able for people all across the world and in communities to be able to access information. One beautiful aspect about open data is that it encourages not just the private sector, government and all other sectors to be able to share their data, to be able to have people utilize this data for purposes and where we’re able to strengthen ourselves and also enforce where there are data gaps in which we can be able to share and also improve our societies. Well, in accessing this data, we all know that we’re in a digital world now and data is not just on hard copies in some libraries or some safe havens or safes where it is and you need to be able to have the other data which is internet connectivity to be able to access this data. And that’s where we also come in in which we are bridging this divide in terms of connectivity and setting up community networks and also helping the communities themselves to have the skills to set up a network, to have the skills to be able to utilize this data, interpret it and understanding the data for themselves and also being able to transmit the data in formats which are usable, acceptable and also safe for them. So those are my open remarks and I leave the floor for the rest of the panel.

Moderator – Yusuf Abdul-Qadir:
Thank you Kwaku. I want to jump to my colleague at Syracuse University, Dr. Lee McKnight. Lee, as Kwaku said, data is gold. It is valuable. Many companies are in an AI race right now where they’re leveraging data in ways that are helping to accelerate their economic opportunities but we’ve done work in the past around ensuring that not just that we ensure that data is accessible but that we preserve people’s rights. Can you talk about the relationship between expanding access and internet connectivity with ensuring that that data is governed properly and appropriately and can you lead us into some solutions onto how you manifest that in your work?

Lee Mcknight:
Thank you. Thanks Yusuf and thank you all for being here virtually or in person and engaging in this very important conversation. I want to recall back to 2008, IGF in Hyderabad, perhaps somewhere here, there or there at that time when the coalition, Dynamic Coalition on Internet Rights and Coalition on Internet Principles agreed that it didn’t make sense to have two coalitions on rights and principles but there really should just be one going forward. of the following year. Since then, there’s been a charter on internet rights and principles created. Following that, over work with you, we’ve taken that work forward on embedding in the virtual space rights and principles for governance, for whether it’s for data rights, for privacy, for security. That now has been extended closely with you, Yusuf, to smart cities and communities. Any village, any community can be a smart community, can have embedded in its governance framework rights and principles, including for data rights. So that’s going forward to the present, where now with the work also with the Africa Open Data and Internet Research Foundation on bringing connectivity to communities anywhere in the world, we can help ensure that the rights and privileges to citizens’ data are determined by those people who live there, and they’re not automatically harvested by external forces without the consent of the community.

Moderator – Yusuf Abdul-Qadir:
Thank you. I think along those lines, we have the honorable. Can we just say this again? Honorable Samuel Narti George, a member of parliament from Ghana here with us. And Dr. McNight explicitly mentioned the importance of ensuring nothing about us without us. In essence, that we should not be accessing and determining governance principles around data without ensuring that the communities who are directly impacted have not just a voice, but are driving the conversation. Can you talk a bit about the role that you as a member of parliament can play in ensuring that data governance is inclusive of the voices of your constituents, and the work that Ghana is doing to accelerate access to data and making sure that the data ecosystems are secure and respecting your citizens’ rights?

Samuel Nartey George:
All right, thank you very much. Good morning, good afternoon, good evening. depending on what part of the world you are in. I believe that the conversation about doing this for everyone, inclusive of everyone, is extremely critical. And for me, it highlights a major disconnect because we have this conversation about the West leaving Africa, but we don’t discuss the disconnects inside of our own countries in Africa between our capital cities and the rural communities that are underserved or unserved. Because governments and parliament has to take decision on the basis of data that’s generated. A lot of this data is generated from e-government portals and services that people access online. Now, the question you need to ask yourself is the connectivity in Accra, for example, is different from the connectivity in a rural community in the northern part of Ghana. And so that data that parliament or the Ministry of Finance is going to be using to advise parliament in terms of resource allocation is going to be skewed based on the data, the source of that data, which is skewed towards the urban areas where people have higher spending power and are able to buy data, because we joke about it, but possibly what I spend on data in a week is actually a whole family up north, the whole subsistence of that family of six people for a whole month. And so the question is, if data is not as cheap and accessible and platforms are not accessible, people are not contributing to the data pool. And so we need to look at the disconnect and the digital gaps inside of our own countries on the African continent between our urban areas and underserved areas. And that’s where the community networks coming. And that’s where you have in Ghana, for example, our universal access fund, GIFEC, trying to close that gap and do last mile connectivity. I keep saying that we have a lot of conversations on these platforms about connectivity, bridging the connection, the connectivity gap, but we’re not talking about whether that connectivity we’re bridging is actually accessible or affordable. Because it’s one thing to bring a network into a community, it’s another thing determining if it’s at a price that the individuals in that community can hook up to the service. Because if you don’t get the data from the people in the underserved area, we will continue to make decisions in parliaments, in capital cities, that are skewed away from the needs of the people on the ground. And so that’s where the real disconnect is, and that’s the real quagmire that I think we need to figure out, how do we get, because government is increasingly making its decisions on the back of data sets that are generated by people’s, by digital footprints of citizens. But if in our countries we have citizens who do not have a digital footprint, because they don’t have access to internet, or even when you bring internet at very economically affordable prices, the cost of smartphones is inhibitive, because there are various segments to connection. It’s the connectivity itself, then the cost of the connectivity, and then access to that connectivity on a device. And so for me, I’m beginning to champion a case of saying that, just like in pharmaceuticals, where you have generic drugs, because the big pharma, big pharma has made profit from its intellectual property, and so a drug that’s produced by Bayer or Pfizer could cost about $100 for a sachet, but I could get that same drug by an Indian generic maker, same efficacy, but not the same brand name for $5, and by doing that in pharmaceuticals, we should begin to do the same thing in technology, where the likes of Apple and Samsung have made a lot of money off the intellectual property, we should begin to have generic devices that are going to be cheaper, that are assembled on the African continent, and then would make it easier for people to have digital footprints, because a citizen without a digital footprint cannot be part of the data sets that government is using to take decisions for them.

Moderator – Yusuf Abdul-Qadir:
As I said, honorable. Dr. Uzma Alam here from Science for Africa Foundation. Join us virtually. Dr. Uzma, really appreciate you being here. As a public health practitioner, data is key to understanding how we can solve, especially in context of the pandemic that we’ve left, are kind of leaving, or may still be in, depending on where in the world you might be. Data has been tremendous in both deploying public health resources and understanding how we are going to be efficient in ensuring everyone is taken care of. Can you please share with us a bit about what you’re doing at Science for Africa Foundation and the role that data will play from a public health perspective?

Usman Alam:
Thank you for that. And greetings, everybody, from Nairobi, Kenya. And a big, big thank you for the organization. And it’s been really exciting for me to hear the panelists who came before me, because where we are, like where the Science for Africa Foundation plugs in, we are towards the end of, we would be benefiting from what some of the panelists have started doing, especially in health. So the Science for Africa Foundation, just for context, is a pan-African organization where we fund research and innovation across Africa. But we also work with designing programs and providing ecosystem strengthening. And within that, we have a science policy engagement portfolio. And that you pointed out, too, looks at how can we drive value from African-generated data, and how do we actually stimulate what the honorable member of parliament just mentioned? How do we ensure that Africa is responsible for generating its own data? But also, how do we govern that? And I think critical issues and threads of this have come up, but something I just would like to highlight for context of this conversation is, I’ve been hearing the word data, data, data. But I think within data, what our work’s pointing to and what I think the discourse should be focused upon, or to even answer your question directly, what will take us from data to impact, is really those nuances within data. And what do I mean by that, right? So yes, there is data, but there is this need for diversity, equity, and inclusion in data. As we’ve already talked about, you know, west-driven data, a footprint that doesn’t match the African content. But within that, we have our women. Let’s not forget them, a big piece, when it comes to health, and especially the next pandemics, right? And within that, there’s also this whole concept within Africa that we really need to, if we need to get from data to preventing the next pandemic, like you said, or even drive impact. is this piece around governance, right? So Africa, yes, needs to generate its own data and we need to be responsible for governing it. But there’s this piece that, you know, we need to appreciate that data is obviously cross-cutting, right, whether it’s health, whether it’s agriculture, whether it’s finance and stuff. And what some of our work has been pointing to, especially when we start looking at governance around data policies in Africa, you know, they’re housed very specifically within, for example, majority of the ICT, you know, equivalent ministries of health, right? I mean, ministries of ICT. And that obviously has implications for how somebody in health can access that data, even at a local level, even within governments, right? And there is this fine balance of what, you know, what the mission of one is, and, you know, what the mission of the other is. I think, you know, my rallying call to this conversation and, you know, to get from data to impact would be, yes, you know, we need equitable partnerships. We need, you know, locally generated data, and that includes devices for it. But we also need to be very careful of how we govern, that we do not start governing in silos, that, you know, when the data exists, we can’t even have access to it, even at a very high levels. I think I’ll stop at that and hand back to you, Yusef.

Moderator – Yusuf Abdul-Qadir:
Thank you, wow. I think we still have Victor Uhurugu on the line. Victor, please, if I’ve, again, mispronounced your name, I am a stickler for saying people’s name right, so please let me know if that’s the case. But if you can please talk to us about your work at the UN, and in particular, how the UN is trying to bridge the gap between all of the respective conversations we’ve had here. We have academia here, we have governments here, we have civil society here, and the UN plays an important role as a convener. What are you doing at the UN, and how do you see these issues of data governance, particularly for Africa, manifesting themselves in ways that have. help to facilitate the Sustainable Development Goals.

Victor Ohuruogu:
Thank you very much. Can you hear me? Can you hear me? Yes, yes, we can hear you. Yeah, I think I’m Victor, Victor Horogu. Yeah, so good morning, everyone. Of course, good evening for some of you in some other parts of the world. I’m Victor Horogu. I work with the Global Partnership for Sustainable Development Data, which is warehoused within the UN Foundation. The UN Foundation is where we’re currently seated. The Global Partnership is a growing network of over 600 participants or partners, which includes state actors and non-state actors. These non-state actors are civil society organizations, the private sector, research, academic institutions, developer communities across the world. And these actors are set across about 35 countries in Africa, Asia, Latin America, and the Caribbean. So we’re all collaborating together to accelerate progress on sustainable development and on the SDGs particularly, but through better data. So together with our set of partners, we have looked at and we’re collaborating across three key systemic issues that have been identified together with our partners. And one particular set of that issue focuses on timely data. We do believe, and we have seen across the world that governments particularly need data on a very timely basis to making decisions and enabling the various policy instruments that they do put together. But these governments are not having that quick access to information, to data. And so we’re helping governments to make use of both non-traditional data forms and technologies that could help them have the best and quickest access to this data. We also look at… the issues of inclusive data, where we want to see marginalized groups, you know, have more agency in the data value chain. People who, you know, have been left out. We’re ensuring that governments and all other actors within the data value chain can focus on this set of people. And the third component of our program looks at accountable data governance. You know, we’re trying to unlock the opportunities of data for all, making sure that data is well governed, you know, by certain standard principles. And so what do we do in a particular country? My role covers Africa pretty much, where we have seen that there is a huge issue around capacity, just understanding what data is across different levels, both in the political and technical space, understanding what type of data is needed, you know, to drive certain policy, you know, issues, understanding how to even use that data in itself is a major issue. And so we have various programs that focuses on building capacity in terms of, you know, understanding what type of data is needed, where to source that data, how that data can be used. And we’re working with both, you know, all the actors within the value chain, particularly governments, but ensuring that we can strengthen, you know, connection and partnership between the private sector and government to drive the agenda of data. You know, we want to see how that data is given its prominence within the political space. Particularly, it would, you know, of course, not be too surprising for many of you that a lot of government actors, you know, make decisions that are not data driven. You know, of course, politicians, you know, are pretty much focused on what, you know, will get them into the place that they need to be. But oftentimes, very many of them do not reckon with data. So how do we push the political profile of data within government, but also working with the technical, you know, level guys to ensure that they have access to the right data sets that they need to support government in their various decisions and policymaking process. And so I will look forward to, you know, how that, you know, we could have it. broad-based conversation that bring all of the actors together, and we could work, you know, with all of you to address, you know, the issue of access, availability, and the use of this particular data for policy and decision-making within the continent. Thank you very much.

Moderator – Yusuf Abdul-Qadir:
Thank you, and thank you for correcting me in the pronunciation of your name. You know, as we said, this conversation will be split into two. I want to advise those who are joining us virtually that our colleague Lahari Chowdhury will be able to collect your questions. If you are joining us virtually and you have any questions, Lahari Chowdhury will be able to collect your questions in chat. That way, we can make sure that we are including everyone in this conversation. So, as I said, the first component of this conversation has kind of been addressed. We’ve talked with our dynamic panelists here, and I want to jump to, and I think actually Victor helped us really transition into the second component, the second piece of our conversation, which is leveraging technology and community networks to make sure that everyone, making sure data gets to everyone. You know, Dr. McKnight, I’ll start with you first. I don’t want to assume we have, we’re all operating on the same set of understanding of what community networks are and how we can leverage technology to both advance community networks and make sure that data gets to everyone. So, can you do two things for me? Can you first explain what are community networks? How do we ensure that it can be utilized as a mechanism to ensure access to data for everyone? And then talk a bit about some of the work that you’re doing around this particular set of questions. Sure. Thank you so much, Yusuf.

Lee Mcknight:
So, first, we can think about community networks, and I would give a lot of big credit to the Internet Society for all of its advocacy and work over many years in encouraging people to think not just of telecommunications or national level networks, but the fact that people can, in fact, build and create their own local networks. And so, that work has been ongoing for some time. I wanted to bring in here one example, maybe as this transition from the first part of the conversation to the second, and I forget her name. I should remember her name, but the mayor of a Chilean community that was previously disconnected until there was a community network during the pandemic. She said, we exist. That’s now she’s part of the data pool. Yes, she has to have rights and be protected, but now her community, she exists in a way she didn’t before. So, community networks provide a way to now bring connectivity to people, the 2.5, 2.6 billion people that exist, but they’re not counted. They’re not included in any way, generally speaking, in our conversations because they cannot reach us digitally. All right. So, now how do we go about this today? There’s many different technologies available to create community networks, and that great work has been done for some time. We here at Syracuse University, working with the Worldwide Innovation Technology Entrepreneurship Club, or WITEC, over decades, have developed a package. small form where it’s not enough to have connectivity. If you don’t have energy, right, you cannot stay connected for very long. Your battery life. So having a package that includes both a connectivity solution and is a tiny little portable microgrid solar powered, that’s been something that we’ve been evolving and has been deployed into over 20 countries now and is currently in use in Ghana for connecting school children libraries and further as its first deployment was in the Democratic Republic of Congo. So it’s possible now. It’s not like, this is not theory. This is just something that we come see it in the exhibit booth. Otherwise, you could have a more established, a larger community network with established towers and so on, but you don’t necessarily need to create any new infrastructure. We’ve talked about being the academic here, infrastructuralist networks. So we can have an infrastructuralist network that is not just a network, it’s also a microgrid that exists, that you can go see it. So this is not theory, this is fact, and we can take this, there’s 2.6 billion people that need to be connected. They exist. I’ll stop there.

Moderator – Yusuf Abdul-Qadir:
Honorable, and thank you for that, Lee. I’m struck with the we exist comment that kind of struck a particular core with me. And Ubuntu is a concept on the continent of I am because we are this notion that we are, we have an obligation amongst and with each other. Can you talk a bit about just the way that we go beyond, and I thought you put it beautifully, beyond connecting people from a very kind of academic or kind of theoretical perspective, but what does that do to demonstrate that we exist? How do we unlock people’s fullest potentials by providing them the access to data as well as the internet?

Samuel Nartey George:
Well, it literally just transforms the world. the world. It changes the entire economics of that locality. And I’ll give you a typical example of something that a project that we’re toying with in Ghana at the moment. If we were able to connect an unconnected community and then we could send them educational material, a young man who he or she would have had to go to a city center to learn a trade or go to a master craftsman could actually with a smart phone take models in how to become a bricklayer or a mason or become a skilled laborer. And that gives him an employable skill. That puts food on his table. So there’s an ability to run blended learning platforms are critical. COVID taught us a lesson in Ghana where kids who were not connected to the national grid lost a year of school. If we had community networks, because we actually put educational material on the Internet and on national TV, but some of these communities had absolutely no connectivity, be it electricity, TV or Internet. And so the kids in those schools have lost a year of their lives thanks to no fault of theirs. Now, if you’re able to connect these communities, you transform the whole ecosystem there. Because there’s someone there who’s now going to be able to run a business center. It brings a whole new lease of life to the people in there. And so, I mean, most of us in those rooms, even in capital cities, we do a lot more with data on our phones than voice calls. Our lives revolve around data. And you can just imagine what happens if you don’t have data. The first thing people ask for when they walk into an establishment, especially for all of us who have traveled here, the first thing I did at the airport was not to change money. idea that the airport was to get a data seam. Because it’s the only way I can stay connected. It’s the only way I can stay productive. If I don’t have data, I’m cut out. And so if we’re able to bring people to a place where they’re connected, you actually open a whole new specter. You just need to see the excitement in communities that get connected to 3G for the first time from 2G. Because when they’re just doing voice, they have absolutely no connection to the internet superhighway. And now the internet is actually where everything happens. The young people who’ve graduated school in urban areas who are able to make a livelihood by trading on Facebook, being able to sell, they buy things, they have a Facebook page or an Instagram page, and they’re selling. Now imagine that there’s a young man in the rural community who also has the opportunity to now become the guy who, if you need anything from the major city, he goes to pick it up, puts a little margin on it, and people in that community can actually just deal with him on WhatsApp. It doesn’t even have to be on Instagram. He can run a business page on WhatsApp where he advertises his wares, and he can transact business there. So there is real economic power that exists when you give people connectivity. Because I mean, many of us take these connections for granted. We use them for TikTok and Instagram and Snapchat. But the internet has real economic power for people who are in the most difficult positions. And that’s the power of transformation that we can bring. When we let people realize the transformative positive impact of the internet, either for business or for educational purposes, there’s a real life opportunities that can change the whole specter for people. And when people get these skills, it’s now a digital world. He can be sitting in that village and doing data processing for a blue chip company in the United States and get paid for it. Because now he’s able to lend data processing. or learn coding online, those are all opportunities that he, the two, he would have to leave that community and travel to an urban area where he most likely has nobody and be exposed to all the vagaries. But you can bring the world into the small device in the hands of that young person so long as you give them a connectivity. And I think that it’s something that as governments and as parliaments, we need to begin to prioritize to identify these communities and begin to reach out to them as a matter of course. Because for the telcos, most of those communities don’t make economic sense for them to go into in the first place. Because they’re looking at the numbers, they’re looking at the cost of running their infrastructure. And so when I hear Doc talk about infrastructure-less connections, those are the kinds of connectivities that we need. And for the kids who are using those connections in Ghana to access educational material, that’s material they would never have been able to access. But kids who are gonna write the same end-of-year exams with them who are in urban areas have access to those same materials. So you have kids going to write the same exams but are completely disadvantaged from the get-go. Now how do they pass and compete with these kids in the urban areas for limited slots in public universities? So this just bridges the gap and creates a whole new vista for these young people in those communities. And that’s why it’s very imperative that we take this as a very serious point.

Moderator – Yusuf Abdul-Qadir:
You know, I will be very transparent and say that I am a professed, avowed, and committed Pan-Africanist. And Kwaku and I have had a number of conversations personally around Pan-Africanism and the role that it plays in both facilitating a brighter future for African descendants across the globe. But Kwaku, if you could talk to us a bit about what open data and community networks can mean for helping to not just facilitate the UN Sustainable Development Goals, for not just unlocking, as the Honorable mentioned, the fullest human potential. of each of us, but also to build a United States of Africa to kind of facilitate for this connected, inclusive continent.

Kwaku Antwi:
Thank you, Yusuf. And I think the Honorable Member of Parliament has given us a segment. I think Dr. McKnight also spoke about it. Basically, when we talk about open data and also the infrastructure where you’re able to access on portals and with technologies and the data, it’s important for us just not to think. I think Dr. Ousmane talked about silos, OK? To not think about our domains where we are looking to ignite or digitalize or to push forth for the technologies to apply. And we’re just talking about education, OK? But there is endless possibilities to the innovation of the technologies and the data. So for example, I’ll just give a very short example we had this year. This year, we did an Open Data Day in which we celebrated open data in Accra. And what we did is that we brought together persons from the statistical part with the open data. We brought people back from the private sector in terms of those who use geospatial data technologies. And then we brought the space science technology. And guess what happened? They were talking to people. They were talking to themselves in the room. They were doing very similar jobs, which required a lot of data from a disaster, from climate to economic data to private geospatial data for all sorts of purposes. But guess what? They were transforming our communities. And what was the connection? What is this connection? Is the connectivity to be able to connect to the internet? to be able to talk to people, to be able to exchange. Today, I’m able to connect to you from where I am in Accra, Ghana, due to internet connectivity. I’m knowledgeable, I have that information, I’m sharing with everybody because I’m connected. I’m connected because we are all in an open environment in which we can share information. And this information is being recorded and it’s gonna be reposited in a portal or at a storage place where everybody can access. And this is the power and transforming nature of internet connectivity and the power of data and information that it brings and the richness to which. In Africa, we have this potential. And yes, we should and we are able to transform our communities with this kind of data and information that we have. Because being here and being, having that internet backpack in Winneba or in Tamale or in Wa or in Ho, I should not just be able to connect with the school children who are in this community, but I can also connect across the country. Not only so, but the cloud infrastructure that is available for you to be able to access should you have also interactive capacity in which is not just for education, but also our healthcare facilities, our agricultural facilities, and also all other facilities who are able to connect as we are doing across the continent. And as Yusuf, it’s important that we recognize our capacities and being able to enhance it in African context to be able to connect everybody, as you said, Ubuntu. We are all moving together in this forward together and we go ahead with everybody ahead. Thank you.

Moderator – Yusuf Abdul-Qadir:
Thank you Kwaku. Dr. Uzma, you and then Victor will be the last two before we open it up for conversation with those of us in the audience. Folks who may have questions online. please do send them in chat and Lahari Chowdhury will make sure to get those questions to us. And for those who are in the room, please, you don’t have to run up to the mics, but you’re welcome to also join us for questions. Dr. Uzma, the question for you is centered around good health, well-being, gender equality, and let’s leave it with those two issues first. You know, we’ve talked about unlocking and unleashing everyone’s potential. We’ve talked about the way that connectivity can ensure people’s fullest potential can be maximized, but from a practical, pragmatic perspective, this can have significant implications on ensuring good health and well-being and gender equality for women and girls. Two of the 17 SDGs addressed there. Could you please lean in a bit about those two topics for us, because we don’t want to make sure, we want to make sure that we are not leaving out an explicit call out for gender equality, for making sure that women and girls are included as well as good health and well-being. Dr. Uzma.

Usman Alam:
Oh, thanks. Thank you for that question. I love it. It’s really got me more excited. So, you know, if we pin our work around those two pillars you’ve said, right, and then bring in this piece of connectivity, it’s actually what everybody has said, right? It’s life-changing. And it’s life-changing not only for the end users, but in honesty, it’s going to be all of us, but what it also does is it changes how research and innovation is done within the African context. And that’s, you know, very critical. If this dialogue has been saying, we need to drive our research agenda, you know, we need to drive our innovation, but then, you know, as soon as we start saying that, we need to start thinking how are we going to fund this, right? And the only way we can, you know, ensure that this, that we take off, not take up the boxes, but, you know, we leverage on all these different areas, whether it’s gender, whether it’s, you know, data, whether, you know, you want to call it connectivity, is through these linkages, right? And the way I would like to look at these linkages or connections is through community of practices, right? And just to give you a small example of what this means in. in practice in how it’s implemented, right? So we are very focused on when we fund for whether it’s research or innovation to fund within this hub and spoke model, right? And where you have a lead organization that works with, you know, other organizations around. And these can be, you know, the private sector, the academic sector, the government sector. And when we say lead institutions, you know, the importance for us was like, if we just look at the funding landscape, right, whether it’s for health, whether it’s for innovation, whether it’s for agriculture, finance, or whatever in Africa, you know, there are pockets of where the funding goes. We know South Africa is strong. We know Kenya is strong. We know some of the North African countries are strong, but Africa is huge and there’s capacity across our 54 member states. So to ensure that we leverage these, you know, our model says, or works around the philosophy of, all right, you know, you’re a stronger lead institution, but you need to partner with the other stakeholders and, you know, bring in these other players that wouldn’t have access to this, right? And what that all of a sudden does is creates equity for us, right? Whether or not only in how we fund, but also, you know, how we bring in our women leadership, how we bring in other stakeholders, how we connect government to researchers and to data. And for us, so that’s a big piece of equity. So connections and connectivity is a community of, you know, practice all of a sudden translates into equity. But another thing, you know, a critical thing, and I’m surprised it didn’t come in our first discussion is when we think of data, there are lots of trust issues. We need to be honest, right? Even within academics, even within the experts. But once you provide this framework for sharing knowledge, exchange connectivity, you know, there’s this trust built. So that already then translates into sustainability and sustainability is a big part of how health is going to play out on the continent. And I think one other piece that’s, you know, really powerful for us from the health perspective is, again, when you think of, you know, connecting that first conversation we had around data and, you know, the second conversation around connectivity, there’s this piece of endogenous knowledge, and there’s a lot of endogenous knowledge in Africa that’s not pinned upon. And just to give you a very quick example, so I remember responding to Ebola in, you know, Sierra Leone, Liberia, and Guinea. And I remember, you know, there was this whole thing about isolation and stuff, and, you know, having this academic conversation of how are we going to do this at GDC. But all of a sudden, because we had this connectivity and we weren’t really, you know, networked within networks of communities, you know, it was the endogenous knowledge that drove this. Somebody from Sarah Lone said, hey, you don’t need to do this. We already do this. We already have isolation centers for our women and children, you know, when they go through measles or when they go through menstruation and stuff. So we literally, that was knowledge, that was data that existed that we could leverage on. So I think, you know, just to end, it’s, you know, life transforming and for us and the end of implementation, it, you know, drives three things, equity, endogenous knowledge and trust.

Moderator – Yusuf Abdul-Qadir:
Wow. We have a question online and Victor, I’m gonna direct this question to you. It comes to us from Daria Tamereva. She’s notes and asks, how could we effectively implement data for the humanitarian sector? We, and then another question, I’ll say the second one with you honorable, which asks, we lack digital skills. What can be done to remedy this situation? So Victor, the first question for you is, how could we effectively implement data for the humanitarian sector? And then a question on digital skills and remedying that for you, honorable.

Victor Ohuruogu:
Yeah, thank you. Thank you so much. So how could we implement data for the humanitarian sector? So for us, we, one of the things that we, you know, come to discover is that the, within the principles of, you know, leaving no one behind, we’ve come to realize the fact that the development sector straddled between the public sector and the private sector seems to be very much advanced as the private sector is, but within the public sector space, there’s a lot of capacity issue. And so our concern is. how do we bridge this capacity issue that enables the public sector, particularly coordinating with the development space in the private sector to respond more effectively to humanitarian issues when and when they do break out in Africa. And so we’re looking at a couple of things, particularly within the public sector space to strengthen their capacity in responding to humanitarian issues. Of course, when COVID broke out, it has, of course, humanitarian dimensions to it. And what did we do with a couple of countries across Africa? One is the fact that we did discover that infrastructure was a major, major challenge for the public sector in responding to issues such as humanitarian challenges. Infrastructure, such even as computing infrastructure that enables them to bring together all the key information and datasets that enables them to make quick decisions. So we were working with a number of the presidential tax policies on COVID-19 to, of course, bringing our private sector partners, looking at where infrastructure is needed for immediate deployment and where causing that partnership to occur. Second is the issue of capacity, knowledge, and skill in using this infrastructure and data to making the decisions that governments are needed to make at that point in time. And then we identify what are those capacity-related issues and brought together a set of our partners who are helping to train public sector officials across Africa in identifying and using the type of data that is needed to support governments in making decisions. We have partners like Grid3, who was working with a number of health institutions, the national statistical offices across Africa in a couple of countries to bring the sort of datasets that they need, both from the economic side, from the health side, you know, just, you know, mashing this data. data sets together to provide analytics and insights to enable government to making decision. The third area was the area of the capacity around understanding sovereignty issues with respect to data. This data that is needed for decision purposes are being collected from a set of people across various communities. We were opening the eyes of government to ensuring the fact that these people whose data is gonna be collected and used must have a say in the way that data is being collected from them and in the way that that data would eventually be used. And so we wanted to ensure that everyone whose data is being used must have a say, they have a right, their rights as well must be protected. But more importantly also we saw that a lot of folks from outside of Africa were all jumping into Africa, demanding for data from government, using that data without recourse to certain principles, sovereignty principles in those locations. And so we’re opening the eyes of government and strengthening the capacity to manage these data sets in terms of understanding that the first issue has to do with ownership. Government within those spaces where those data sets are being collected must exercise ownership over such data. We also wanted to be sure that those data sets need to also be located within the confines of those countries. There were countries that were willing to work but they insisted that the data sets must not leave the shores of their country, they must be used for the purposes for which they were collected for. And of course the issues of privacy and protection, how data moves from one country to the other, even building the capacity of governments in terms of governing that whole data space itself was a major issue. Governments across Africa in most instances, the public sector institutions that handle these processes don’t have real capacity within the issues of data governance. So we’re also helping to build those capacities to ensure that the data sets that are gonna be used are limited, you know, to the borders of that nation, but of course also are very much, you know, the people comply with the protection laws, the confidentiality issues that that data brings upon all of us. So in a nutshell, the humanitarian sector needs data and critically, how do we help them is also making sure that the data that they need can be available. It’s accessible in formats that they can take and use this data for quick decision-making. And so we are hoping that one of the things that governments in Africa would really focus on is in data infrastructure that enables data systems from across various institutions to be connected together and to grant, you know, better access to everyone, particularly within the government sector who needs to use these data for policy and decision-making. Thank you.

Moderator – Yusuf Abdul-Qadir:
Honorable, before you jump in, are there questions in the, if you have questions in the room, please line up and we’ll make sure to get you honorable and then we’ll have these two questions.

Samuel Nartey George:
Yes, please. Sorry, oh. Okay, I’m just gonna keep it short so you don’t have to stand for too long. Basically, when it comes to education, it’s a very simple process. We need to be able to work with civil society and the technical society to build the capacity and do the training programs. Members of parliament need to be able to, on our own, we can offer the training, we can give the training, but we can partner with civil society organizations and technical society to bring the skill sets to our constituents. So for example, I could put a quota from my constituency development fund towards acquisition of the skills and then run them as boot camps so that the local constituents are able to get some of these digital skills. So I think it’s going to, and then that’s just at the level of the member of parliament, but government as a whole must begin to look at how it can also run these programs. In Ghana, for example, we have the Girls in ICT program, which focuses on girls in. in second cycle schools, takes basically a road show to the schools, trains them in basic code writing and hacking and ethical hacking, and gives them some kind of digital skills. And so governments can be able to invest in those kinds of programs as well. But ultimately, the resource must come from private sector, civil society. We must be able to build those synergies and give those skills to the people.

Audience:
Thank you. My name is Jarell James. I founded something called the Internet Alliance. And I’ve done a number of cryptography projects before this. I work in a deeply technical field on emerging technologies around cryptography for quite a while. And so I’d like to ask a question specifically around leverage. And as someone who is a devout Pan-Africanist, I do think this is very relevant. When we talk about like Walter Rodney or Thomas Sankara, do we think that their values around African intellectualism, African ingenuity, and valuing that as a resource, do we feel that that is reflected in the way that African nations are leading their countries with development around data sharing? When Victor spoke about not letting data leave borders, I mean, in a lot of ways, when we work in cryptography, it’s like data can leave borders. But it can only be accessed by those who have direct cryptographic key access to that data. And there is this idea that Africans are just supposed to share everything that they have for the sake of development, for the sake of checks, for the sake of investment from expat. So my question is really around community networks and all of this stuff is really great. But is there ways and better approaches to leveraging data as an actual asset? Leveraging the fact that GSMA predicts the greatest boom and growth in internet users is going to be Africa and is Africa year over year. So when someone comes in and builds telecommunications infrastructure, and they choose not to go to a region that seems to be not profitable, do we feel that there is? room there to say, well, if you want access to this data, we have leverage over you. Can we develop more? I just want to hear some thoughts on this. Because it seems like we’re operating from a very subservient position there.

Moderator – Yusuf Abdul-Qadir:
I love the question. I’m going to take the second one, and then we’ll try to get both.

Audience:
OK. Sorry. OK, so my name is Emmanuel from Togo. So mine is more like a contribution, because recently I worked with APC and other organizations on a report regarding data protection on the continent. And what we noticed is that on the continent, we are leapfrogging. Like, we are collecting too much data. I mean, in our countries now, you see the telcos are collecting. The hospitals are collecting. Everybody’s collecting data. So the consequences in the future can be very huge with all the emerging technologies we are seeing today, like AI. So the consequences for the continent we have to be careful. They will be very, very huge. And it is important for us to develop our data protection laws on a human-based approach, human-rights-based approach. Because most data protection laws on the continent today are not developed on a human-rights-based approach. And by human-rights principles, I mean the accountability. There are a lot of data breaches in Africa today, but who do we hold accountable? So there’s that accountability. There’s also the discrimination, equality, empowerment, and legality. I know a lot of countries in Africa today are actually enacting data protection laws for the sake of a check. So it’s something that we have to actually also see that, is it really necessary for us to collect this kind of data? Because we collect two main data. If I take the voters, for example, in my country, they collect their biometric data. They take their picture. They take their 10 fingers. They take all those data, but the government does not have any infrastructure locally to store those data. So those data are somewhere in Belgium with a private company. Nobody has access to those contracts to see the accountability level of those type of contracts. So 3 million voters have their data with a private company somewhere in the world. So those are some of the aspects that we have to also look at. I know in West Africa now, they are building a regional data registry for countries like Benin, Burkina Faso, Togo. where the World Bank has actually put in more than 300 million dollars to actually build that registry. But the problem is that our government usually, because if I take the case of Togo, they took that check, and before taking the check, the prerequisite was to vote a data protection law. They voted the law, but there’s no agency to implement the law. So there’s no need to have a law if we cannot implement it. So those are some of the things that we have to look at when we are actually voting those laws. We have to be able to implement it. We have to be able to actually fight for our data rights. We have to know who has access to it, to what level, can we correct it, and all those kind of mechanisms, we have to put them in place before going for those checks. Thank you.

Moderator – Yusuf Abdul-Qadir:
Honorable, if we could do this in a rapid fire, and I would be remiss if we didn’t afford the folks who asked the questions to continue the discussion. So we’ll continue the conversation. We’ll probably have to walk outside to do it, but we’d be happy to do so. Honorable.

Samuel Nartey George:
I honestly wish these questions came like 30 minutes ago, and I agree with you. Africa doesn’t know what we’re sitting on. We’re being exploited. Most times when we talk about exploitation in Africa, we think of just the natural resources, but data is being exploited big time. It’s being exploited because the big demographics are sitting on the African continent, and we have leaders who just don’t understand the whole economics of data, and it’s a big problem. And I think that there’s an awakening coming, and the point you just made, and this morning’s parliamentary track, it was a point I made to the panel. I said to them, it appears as though we come to these platforms, and there’s a checkbox that countries need to tick. So we need to have data protection laws. We need to have cybersecurity legislation. We run back, we go past the legislation, and then we get good ratings by international organizations. What they don’t do is then find out, Africa has some of the best. legislation, but implementation is zero. So there should actually be a matrix of checking implementation of legislation that’s been passed. Because, for example, you have Egypt that has a data protection law, but there is zero implementation of data protection in Egypt. And so there is no value to the citizenry there. Nigeria had a data protection law and just only three months ago set up a commission. So there are real issues here, but the international community is interested in saying, oh, this country has passed the data protection law, they’re doing a great job. And because we want to please Western capitals, because of the corruption of African leaders, we’re unable to actually deal with what is really requisite. But I think we’re running out of time. We’ll continue this conversation, but I think that we need a new generation of African leadership that knows that our data is critical and we need to hold it.

Moderator – Yusuf Abdul-Qadir:
With that, we are at time. I want to thank the panelists here for all of the conversation that you helped to drive us towards. Give the panelists a round of applause. For the folks who are online, thank you for joining. Unfortunately, we’ve got to go, but we will continue the conversation and we look forward to having you all join us at our booth. It’s in the main event hall, excuse me, in the main exhibition hall. You can find us at ACIP.org. We’re next to the kimonos, apparently, so grab a kimono and talk with us. And Dr. Smith, did you want to say anything before we close?

Dr. Smith:
Well, yeah, I just wanted to say quickly to the brother, because Pan-Africanism is about recognizing our humanity. The idea of a United States of Africa relates to your question, and I don’t think it’s as futuristic as it seems. It’s actually an idea that has been talked about from the OAU. So I think Walter Rodney’s idea of Pan-Africanism is really Africans at the grassroots level. While we need the politicians, they have the responsibility. and roles, we cannot achieve this goal of a global Africa, of a United States of Africa, without the mass mobility of the young people, of grassroots people. And it’s important to leverage the technologies that we have to move this social movement of a United States of Africa forward. And we can achieve it.

Moderator – Yusuf Abdul-Qadir:
Perfect ending. Thank you. Thank you. Thank you. Thank you. Thank you. you

Audience

Speech speed

198 words per minute

Speech length

937 words

Speech time

285 secs

Dr. Smith

Speech speed

136 words per minute

Speech length

470 words

Speech time

207 secs

Kwaku Antwi

Speech speed

165 words per minute

Speech length

1085 words

Speech time

396 secs

Lee Mcknight

Speech speed

166 words per minute

Speech length

750 words

Speech time

272 secs

Moderator – Yusuf Abdul-Qadir

Speech speed

191 words per minute

Speech length

2192 words

Speech time

689 secs

Samuel Nartey George

Speech speed

186 words per minute

Speech length

2513 words

Speech time

809 secs

Usman Alam

Speech speed

195 words per minute

Speech length

1538 words

Speech time

472 secs

Victor Ohuruogu

Speech speed

180 words per minute

Speech length

1768 words

Speech time

589 secs

Policy Network on Artificial Intelligence | IGF 2023

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sarayu Natarajan

Generative AI, a powerful technology that enables easy content generation, has resulted in the widespread production and dissemination of misinformation and disinformation. This has negative effects on society as false information can be easily created and spread through the internet and digital platforms. However, the rule of law plays a crucial role in curbing this spread of false information. Concrete legal protections are necessary to address the issue effectively.

Sarayu Natarajan advocates for a context-specific and rule of law approach in dealing with the issue of misinformation and disinformation. This suggests that addressing the problem requires understanding the specific context in which false information is generated and disseminated and implementing legal measures accordingly. This approach acknowledges the importance of tailored solutions based on a solid legal framework.

The labour-intensive task of AI labelling, crucial for the functioning of generative AI, is often outsourced to workers in the global south. These workers primarily label data based on categories defined by Western companies, which can introduce bias and reinforce existing power imbalances. This highlights the need for greater inclusivity and diversity in AI development processes to ensure fair representation and avoid perpetuating inequalities.

Efforts are being made to develop large language models in non-mainstream languages, allowing a wider range of communities to benefit from generative AI. Smaller organizations that work within specific communities are actively involved in creating these language models. This represents a positive step towards inclusivity and accessibility in the field of AI, particularly in underrepresented communities and non-mainstream languages.

Mutual understanding and engagement between AI technology and policy domains are crucial for effective governance. It is essential for these two disciplines to communicate with each other in a meaningful way. Creating forums that facilitate non-judgmental discussions and acknowledge the diverse empirical starting points is critical. This allows for a more integrated and collaborative approach towards harnessing the benefits of AI technology while addressing its ethical and societal implications.

While AI developments may lead to job losses, particularly in the global north, they also have the potential to generate new types of jobs. Careful observation of the impact of AI on employment is necessary to ensure just working conditions for workers worldwide. It is important to consider the potential benefits and challenges associated with AI technology and strive for humane conditions for workers in different parts of the world.

In conclusion, the advent of generative AI has made it easier and cheaper to produce and disseminate misinformation and disinformation, posing negative effects on society. However, the rule of law, through proper legal protections, plays a significant role in curbing the spread of false information. A context-specific and rule of law approach, advocated by Sarayu Natarajan, is key to effectively addressing this issue. Inclusivity, diversity, and mutual understanding between AI technology and policy domains are crucial considerations in the development and governance of AI. It is essential to closely monitor the impact of AI on job loss and ensure fair working conditions for all.

Shamira Ahmed

The analysis focuses on several key themes related to AI and its impact on various aspects, including the environment, data governance, geopolitical power dynamics, and historical injustices. It begins by highlighting the importance of data governance in the intersection of AI and the environment. This aspect is considered to be quite broad and requires attention and effective management.

Moving on, the analysis advocates for a decolonial-informed approach to address power imbalances and historical injustices in AI. It emphasizes the need to acknowledge and rectify historical injustices that have shaped the global power dynamics related to AI. By adopting a decolonial approach, it is believed that these injustices can be addressed and a more equitable and just AI landscape can be achieved.

Furthermore, the analysis highlights the concept of a just green digital transition, which is essential for achieving a sustainable and equitable future. This transition leverages the power of AI to drive responsible practices for the environment while also promoting economic growth and social inclusion. It emphasizes the need for a balanced approach that takes into account the needs of the environment and all stakeholders involved.

In addition, the analysis underscores the importance of addressing historical injustices and promoting interoperable AI governance innovations. It emphasizes the significance of a representative multi-stakeholder process to ensure that the materiality of AI is properly addressed and that all voices are heard. By doing so, it aims to create an AI governance framework that is inclusive, fair, and capable of addressing the challenges associated with historical injustices.

Overall, the analysis provides important insights into the complex relationship between AI and various domains. It highlights the need to consider historical injustices, power imbalances, and environmental concerns in the development and deployment of AI technologies. The conclusions drawn from this analysis serve as a call to action for policymakers, stakeholders, and researchers to work towards a more responsible, equitable, and sustainable AI landscape.

Audience

The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge highlighted was the difficulty in capacity building due to the rapidly changing nature of AI. It was observed that AI is more of an empirical science than an engineering product, meaning that researchers and designers often don’t know what to expect due to continual testing and experimentation. Misinformation and the abundance of information sources further exacerbate the challenges in capacity building.

The importance of providing education to a diverse range of demographics, from school children to the elderly, was also emphasised. It was recognised that ensuring high-quality education in the field of AI is vital in equipping individuals with the knowledge and skills required to navigate the rapidly evolving technological landscape. This education should be accessible to all, regardless of their age or background.

Additionally, the panel discussion shed light on the blurring boundaries between regulatory development and technical development in AI and other digital technologies. It was noted that the political domain of regulatory development and the technical domain of standards development are increasingly overlapping in the field of AI. This convergence presents unique challenges that necessitate a thoughtful approach to ensure both regulatory compliance and technical excellence.

Furthermore, the role of standards in executing regulations in the context of AI was discussed. The panel emphasised that standards are becoming an essential tool for implementing and enforcing regulations. Developing and adhering to standards can help address challenges such as interoperability, transparency, and accountability in AI systems.

The need for capacity building was also emphasised, allowing a broader stakeholder community to engage in the technical aspects of AI, which have become integral to major policy tools. The panel acknowledged that empowering a diverse and inclusive group of stakeholders, including policymakers, experts, civil society representatives, academics, and industry professionals, is crucial for the development and governance of AI technology.

The process of contributing to AI training and education through UNESCO was discussed, highlighting the involvement of a UNESCO member who distributes AI research materials and textbooks to universities, particularly in developing countries. This partnership and knowledge-sharing initiative aim to bridge the global education gap and ensure that AI education is accessible to all.

The assessment of AI systems was deemed crucial, with recognition that assessing non-technical aspects is as important as evaluating technical performance. This includes considering the wider societal impact, such as potential consequences on workers and the categorisation of people. The panel emphasised the need for assessment processes to go beyond technical measures and include potential unintended consequences and ethical considerations.

Furthermore, it was acknowledged that the assessment of AI systems should extend beyond their current context and consider performance in future or “unbuilt” scenarios. This reflects the need to anticipate and mitigate potential negative outcomes resulting from the deployment of AI technology and to ensure its responsible development and use.

In conclusion, the panel discussion provided valuable insights into the challenges and opportunities associated with AI technology. The rapidly changing nature of AI necessitates continuous capacity building, particularly in the education sector, to equip individuals with the necessary skills and knowledge. Moreover, the convergence of regulatory and technical development in AI requires a thoughtful and inclusive approach, with standards playing a critical role in regulatory compliance. The assessment of AI systems was identified as a key area, underscoring the importance of considering non-technical aspects and potential societal impacts. Overall, the discussion emphasised the need for responsible development, governance, and stakeholder engagement to harness the potential of AI technology while mitigating its risks.

Nobuo Nishigata

The analysis reveals several key points regarding AI governance. Firstly, it emphasizes the importance of striking a balance between regulation and innovation in AI initiatives. This suggests that while regulations are necessary to address concerns and ensure ethical practices, there should also be room for innovation and advancement in the field.

Furthermore, the report highlights the need for AI policy development to take into consideration perspectives and experiences from the Global South. This acknowledges the diverse challenges and opportunities that different regions face in relation to AI adoption and governance.

The analysis also discusses the dual nature of AI technology, presenting both risks and opportunities. It underscores the significance of discussing uncertainties and potential risks associated with AI, alongside the numerous opportunities it presents. Additionally, it highlights the potential of AI to significantly contribute to addressing economic and labor issues, as evidenced by Japan considering AI as a solution to its declining labour force and sustaining its economy.

Another noteworthy point raised in the analysis is the recommendation to view AI governance through the Global South lens. This suggests that the perspectives and experiences of developing nations should be taken into account to ensure a more inclusive and equitable approach to AI governance.

The analysis also provides insights into the ongoing Hiroshima process focused on generative AI. It highlights that discussions within the G7 delegation task force are centred around a code of conduct from the private sector. Notably, the report suggests support for this approach, emphasising the importance of a code of conduct in addressing concerns such as misinformation and disinformation.

Flexibility and adaptability in global AI governance are advocated for in the analysis. It argues that AI is a rapidly evolving field, necessitating governance approaches that can accommodate changing circumstances and allow governments to tailor their strategies according to their specific needs.

Collaboration and coordination between organisations and governments are seen as crucial in AI policy-making, skills development, and creating AI ecosystems. The analysis suggests that international collaborations are needed to foster beneficial AI ecosystems and capacity building.

The importance of respecting human rights, ensuring safety, and fostering accountability and explainability in AI systems are also highlighted. These aspects are considered fundamental in mitigating potential harms and ensuring that AI technologies are used responsibly and ethically.

In addition to these main points, the analysis touches upon the significance of education and harmonisation. It suggests that education plays a key role in the AI governance discourse, and harmonisation is seen as important for the future.

Overall, the analysis brings attention to the multifaceted nature of AI governance, advocating for a balanced approach that takes into account various perspectives, fosters innovation, and ensures ethical and responsible practices. It underscores the need for inclusive and collaborative efforts to create effective AI policies and systems that can address the challenges and harness the opportunities presented by AI technology.

Jose

The analysis of the speakers’ points highlights several important issues. Representatives from the Global South stress the importance of gaining a deeper understanding of movements and policies within their regions. This is crucial for fostering an inclusive approach to technology development and governance.

A significant concern raised in the analysis is the intricate link between labour issues and advancements in the tech industry. In Brazil, for instance, there has been a rise in deaths among drivers on delivery platforms, which is attributed to the pressure exerted by new platforms demanding different delivery times. This highlights the need to address the adverse effects of tech advancements on workers’ well-being.

The impact of the tech industry on sustainability is another topic of debate in the analysis. There are concerns about the interest shown by tech leaders in Bolivia’s minerals, particularly lithium, following political instability. This raises questions about responsible consumption and production practices within the tech industry and the environmental consequences of resource extraction.

The use of biometric systems for surveillance purposes comes under scrutiny as well. In Brazil, the analysis reveals that the criminal system’s structural racism is being automated and accelerated by these technologies. This raises concerns about the potential for discriminatory practices and human rights violations resulting from the use of biometric surveillance.

There is a notable push for banning certain systems in Brazil, as civil society advocates for regulations to protect individuals’ rights and privacy in the face of advancing technology. This highlights the need for robust governance and regulation measures in the tech industry to prevent harmful impacts.

The global governance of AI is also a point of concern. The analysis highlights the potential risk of a race to the bottom due to geopolitical competition and various countries pushing their narratives. This emphasizes the importance of global collaboration and cooperation to ensure ethical and responsible use of AI technologies.

Countries from the global south argue for the need to actively participate and push forward their interests in the governance of AI technologies. Forums like BRICS and G20 are suggested as platforms to voice these concerns and advocate for more inclusive decision-making processes.

The analysis also sheds light on the issue of inequality in the global governance of technology. It is observed that certain groups seem to matter more than others, indicating the presence of power imbalances in decision-making processes. This highlights the need for addressing these inequalities and ensuring that all voices are heard and considered in the governance of technology.

Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on indigenous groups. The example of the Anamames ethnicity in the Brazilian Amazon suffering due to the activities of illegal gold miners underscores the need for responsible and sustainable practices in resource extraction to protect the rights and well-being of indigenous populations.

Lastly, tech workers from the global south advocate for better working conditions and a greater say in the algorithms and decisions made by tech companies. This emphasizes the need for empowering workers and ensuring their rights are protected in the rapidly evolving tech industry.

In conclusion, the analysis of the speakers’ points highlights a range of issues in the intersection of technology, governance, and the impacts on various stakeholders. It underscores the need for deeper understanding, robust regulation, and inclusive decision-making processes to tackle challenges and ensure that technology benefits all.

Moderator – Prateek

The Policy and Analysis Initiative (P&AI) is a newly established policy network that focuses on addressing matters related to AI and data governance. It originated from discussions held at the IGF 2020-2022 in Addis Ababa. The P&AI aims to tackle policy issues relating to AI and data governance, and has recently released its first report.

The first report produced by the P&AI is a collaborative effort and sets out to examine various aspects of AI governance. It specifically focuses on the AI lifecycle for gender and race inclusion and outlines strategies for governing AI to ensure a just twin transition. The report takes into account different regulatory initiatives on artificial intelligence from various regions, including those from the Global South.

One of the noteworthy aspects of the P&AI is its working spirit and commitment to a multi-stakeholder approach. The working group of P&AI was formed in the true spirit of multi-stakeholderism at the IGF, and they collaborated closely to draft this first report. This approach ensures diverse perspectives and expertise are considered in shaping the policies and governance frameworks related to AI.

Prateek, an individual interested in understanding the connectivity between AI governance and internet governance, sought insights on the interoperability of these two domains. To gain a better understanding of the implications of internet governance on AI governance, Prateek engaged Professor Xing Li and requested a comparison between the two in terms of interoperability.

During discussions, Jose highlighted the need for a deeper understanding of local challenges faced in the Global South in relation to AI. This includes issues concerning labor, such as the impacts of the tech industry on workers, as well as concerns surrounding biometric surveillance and race-related issues. Jose called for more extensive debates on sustainability and the potential risks associated with over-reliance on technological solutions. Additionally, Jose stressed the underrepresentation of the Global South in AI discussions and emphasized the importance of addressing their specific challenges.

In the realm of AI training and education, Pradeep mentioned UNESCO’s interest in expanding its initiatives in this area. This focus on AI education aligns with SDG 4: Quality Education, and UNESCO aims to contribute to this goal by providing enhanced training programs in AI.

In a positive gesture of collaboration and sharing information, Prateek offered to connect with an audience member and provide relevant information about UNESCO’s education work. This willingness to offer support and share knowledge highlights the importance of partnerships and collaboration in achieving the goals set forth by the SDGs.

In conclusion, the Policy and Analysis Initiative (P&AI) is a policy network that aims to address AI and data governance matters. Their first report focuses on various aspects of AI governance, including gender and race inclusion and a just twin transition. Their multi-stakeholder approach ensures diverse perspectives are considered. Discussions during the analysis highlighted the need to understand local challenges in the Global South, the significance of AI education, and the connectivity between AI and internet governance. Collaboration and information sharing were also observed, reflecting the importance of partnerships in achieving the SDGs.

Maikki Sipinen

The Policymakers’ Network on Artificial Intelligence (P&AI) is a relatively new initiative that focuses on addressing policy matters related to AI and data governance. It emerged from discussions held at the IGF 2020-2022 meeting in Addis Ababa last year, where the importance of these topics was emphasised. The P&AI report, which was created with the dedication of numerous individuals, including the drafting team leaders, emphasises the significance of the IGF meetings as catalysts for new initiatives like P&AI.

One of the key arguments put forward in the report is the need to introduce AI and data governance topics in educational institutions. The reasoning behind this is to establish the knowledge and skills required to navigate the intricacies of AI among both citizens and the labor force. The report points to the success of the Finnish AI strategy, highlighting how it managed to train over 2% of the Finnish population in the basics of AI within a year. This serves as strong evidence for the feasibility and impact of introducing AI education in schools and universities.

Another argument highlighted in the report involves the importance of capacity building for civil servants and policymakers in the context of AI governance. The report suggests that this aspect deserves greater focus and attention within the broader AI governance discussions. By enhancing the knowledge and understanding of those responsible for making policy decisions, there is an opportunity to shape effective and responsible AI governance frameworks.

Diversity and inclusion also feature prominently in the report’s arguments. The emphasis is on the need for different types of AI expertise to work collaboratively to ensure inclusive and fair global AI governance. By bringing together individuals from diverse backgrounds, experiences, and perspectives, the report suggests that more comprehensive and equitable approaches to AI governance can be established.

Additionally, the report consistently underscores the significance of capacity building throughout all aspects of AI and data governance. It is viewed as intrinsically linked and indispensable for the successful development and implementation of responsible AI policies and practices. The integration of capacity building recommendations in various sections of the report further reinforces the vital role it plays in shaping AI governance.

In conclusion, the P&AI report serves as a valuable resource in highlighting the importance of policy discussions on AI and data governance. It emphasises the need for AI education in educational institutions, capacity building for civil servants and policymakers, and the inclusion of diverse perspectives in AI governance discussions. These recommendations contribute to the broader goal of establishing responsible and fair global AI governance frameworks.

Owen Larter

The analysis highlights several noteworthy points about responsible AI development. Microsoft is committed to developing AI in a sustainable, inclusive, and globally governed manner. This approach is aligned with SDG 9 (Industry, Innovation and Infrastructure), SDG 10 (Reduced Inequalities), and SDG 17 (Partnerships for the Goals). Microsoft has established a Responsible AI Standard to guide their AI initiatives, demonstrating their commitment to ethical practices.

Owen, another speaker in the analysis, emphasises the importance of transparency, fairness, and inclusivity in AI development. He advocates for involving diverse representation in technology design and implementation. To this end, Microsoft has established Responsible AI Fellowships, which aim to promote diversity in tech teams and foster collaboration with individuals from various backgrounds. The focus on inclusivity and diversity helps to ensure that AI systems are fair and considerate of different perspectives and needs.

Additionally, open-source AI development is highlighted as essential for understanding and safely using AI technology. Open-source platforms enable the broad distribution of AI benefits, fostering innovation and making the technology accessible to a wider audience. Microsoft, through its subsidiary GitHub, is a significant contributor to the open-source community. By embodying an open-source ethos, they promote collaboration and knowledge sharing, contributing to the responsible development and use of AI.

However, it is crucial to strike a balance between openness and safety/security in AI development. Concerns exist about the trade-off between making advanced AI models available through open-source platforms versus ensuring the safety and security of these models. The analysis suggests a middle-path approach, promoting accessibility to AI technology without releasing sensitive model weights, thereby safeguarding against potential misuse.

Furthermore, the need for a globally coherent framework for AI governance is emphasised. The advancement of AI technology necessitates establishing robust regulations to ensure its responsible and ethical use. The conversation around global governance has made considerable progress, and the G7 code of conduct, under Japanese leadership, plays a crucial role in shaping the future of AI governance.

Standards setting is proposed as an integral part of the future governance framework. Establishing standards is essential for creating a cohesive global framework that promotes responsible AI development. The International Civil Aviation Organization (ICAO) is highlighted as a potential model, demonstrating the effective implementation of standards in a complex and globally interconnected sector.

Understanding and reaching consensus on the risks associated with AI is also deemed critical. The analysis draws attention to the successful efforts of the Intergovernmental Panel on Climate Change in advancing understanding of risks related to climate change. Similarly, efforts should be made to comprehensively evaluate and address the risks associated with AI, facilitating informed decision-making and effective risk mitigation strategies.

Investment in AI infrastructure is identified as crucial for promoting the growth and development of AI capabilities. Proposals exist for the creation of public AI resources, such as the National AI Research Resource, to foster innovation and ensure equitable access to AI technology.

Evaluation is recognised as an important aspect of AI development. Currently, there is a lack of clarity in evaluating AI technologies. Developing robust evaluation frameworks is crucial for assessing the effectiveness, reliability, and ethical implications of AI systems, enabling informed decision-making and responsible deployment.

Furthermore, the analysis highlights the importance of social infrastructure development for AI. This entails the establishment of globally representative discussions to track AI technology progress and ensure that the benefits of AI are shared equitably among different regions and communities.

The analysis also underscores the significance of capacity building and actions in driving AI development forward. Concrete measures should be taken to bridge the gap between technical and non-technical stakeholders, enabling a comprehensive understanding of the socio-technical challenges associated with AI.

In conclusion, responsible AI development requires a multi-faceted approach. It involves developing AI in a sustainable, inclusive, and globally governed manner, promoting transparency and fairness, and striking a balance between openness and safety/security. It also necessitates the establishment of a globally coherent framework for AI governance, understanding and addressing the risks associated with AI, investing in AI infrastructure, conducting comprehensive evaluations, and developing social infrastructure. Capacity building and bridging the gap between technical and non-technical stakeholders are crucial for addressing the socio-technical challenges posed by AI. By embracing these principles, stakeholders can ensure the responsible and ethical development, deployment, and use of AI technology.

Xing Li

The analysis explores various aspects of AI governance, regulations for generative AI, the impact of generic AI on the global south, and the need for new educational systems in the AI age. In terms of AI governance, the study suggests that it can learn from internet governance, which features organisations such as IETF for technical interoperability and ICANN for names and number assignments. The shift from a US-centric model to a global model in internet governance is viewed positively and can serve as an example for AI governance.

The discussion on generative AI regulations focuses on concerns that early regulations may hinder innovation. It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing generative AI. Striking a balance between regulation and fostering innovation is of utmost importance.

The analysis also highlights the opportunities and challenges presented by generic AI for the global south. Generic AI, consisting of algorithms, computing power, and data, has the potential to create new opportunities for development. However, it also poses challenges that need to be addressed to fully leverage its benefits.

Regarding education, the study emphasises the need for new educational systems that can adapt to the AI age. Outdated educational systems must be revamped to meet the demands of the digital era. Four key educational factors are identified as important in the AI age: critical thinking, fact-based reasoning, logical thinking, and global collaboration. These skills are essential for individuals to thrive in an AI-driven world.

Finally, the analysis supports the establishment of a global AI-related education system. This proposal, advocated by Stanford University Professor Fei-Fei Li, is seen as a significant step akin to the creation of modern universities hundreds of years ago. It aims to equip individuals with the necessary knowledge and skills to navigate the complexities and opportunities presented by AI.

In conclusion, the analysis highlights the importance of drawing lessons from internet governance, balancing regulations to foster innovation in generative AI, addressing the opportunities and challenges of generic AI in the global south, and reimagining education systems for the AI age. These insights provide valuable considerations for policymakers and stakeholders shaping the future of AI governance and its impact on various aspects of society.

Jean Francois ODJEBA BONBHEL

The analysis provides different perspectives on the development and implementation of artificial intelligence (AI). One viewpoint emphasizes the need to balance the benefits and risks of AI. It argues for the importance of considering and mitigating potential risks while maximizing the advantages offered by AI.

Another perspective highlights the significance of accountability in AI control. It stresses the need to have mechanisms in place that hold AI systems accountable for their actions, thereby preventing misuse and unethical behavior.

Education is also emphasized as a key aspect of AI development and understanding. The establishment of a specialized AI school in Congo at all educational levels is cited as evidence of the importance placed on educating individuals about AI. This educational focus aims to provide people with a deeper understanding of AI and equip them with the necessary skills to navigate the rapidly evolving technological landscape.

The analysis suggests that AI development should be approached with careful consideration of risks and benefits, control mechanisms, and education. By adopting a comprehensive approach that addresses these elements, AI can be developed and implemented responsibly and sustainably.

A notable observation from the analysis is the emphasis on AI education for children. A program specifically designed for children aged 6 to 17 is implemented to develop their cognitive skills with technology and AI. The program’s focus extends beyond making children technology experts; it aims to equip them with the necessary understanding and skills to thrive in a future dominated by technology.

Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future generations. The proposed solution involves providing options, solutions, and education on technology to empower young people and prepare them for the technologically advanced world they will inhabit.

In conclusion, the analysis underscores the importance of striking a balance between the benefits and risks of AI, ensuring accountability in AI control, and promoting education for a better understanding and access to AI innovations. By considering these facets, the responsible and empowering development and implementation of AI can be achieved to navigate the evolving technological landscape effectively.

Session transcript

Moderator – Prateek:
Good morning, everyone. To those who have made it early in the morning, after long days and long karaoke nights that all of us have been having here in Kyoto, welcome to this session on the launch of the report of the Policymakers’ Network on Artificial Intelligence, which was set up by the IGF. I would briefly mention the names of the esteemed panelists here before handing the floor to Mikey to introduce the Policymakers’ Policy Network a bit. We have Mr. Nobuo Nishigata from the Japanese Ministry as the representative of the host country with us. We have Mikey Sipinen, who is the editor for, and the, how do you say, editor for this report. With us we have Jose Renato, who is joining us from Brazil. We have Sarayu Natarajan, who is the co-founder of Apti Institute in India. We have Professor Xing Li from Tsinghua University in China. We have Mr. Owen Larter from Microsoft. And we have Jean-Francois Bombel, who is an expert on artificial intelligence and capacity building, I must say. And I am Pratik Sibal. I’m a program specialist at UNESCO. And I would also like to recognize our online moderator, Ms. Shamira. We don’t see her yet, but she will be also joining us in the discussion, especially on the work that she’s been doing on environment. And the two co-facilitators for this work that we have with us, we have Amrita Chaudhary and we have Odas with us, who are here. So Mikey, I’ll pass on the floor to you first to introduce what was the reasons for setting up this working group. How did the work progress? What is it that the multi-stakeholder community at the IGF was able to achieve? So, over to you.

Maikki Sipinen:
Thanks, Prateek, and a warm welcome to this early morning session to all of you also on behalf of the P&AI community. My name is Maikki Sipinen, and I’m the coordinator of P&AI. I’m not going to take too much time away from our expert panelists describing the process that led us here, but important to know that the P&AI is a really new thing. It’s only about six months old policy network, a toddler, should we say, and the P&AI was actually born from the messages of IGF 2020-2022 that was held in Addis Ababa last year. So, this is a nice example that the discussions we have here at the IGF meeting actually are very important and can result in concrete new things like the P&AI, so that’s quite inspiring. So, P&AI addresses policy matters related to AI and data governance, and we have today gathered here to discuss and debate and maybe even later challenge P&AI’s very first report. And for those of you who didn’t have yet the chance to have a look at the report, you can find a link to it in this session’s information page in the agenda. And what else? Well, many, many, many people have worked super hard to make this session and… especially to make this P&AI report come into existence, and especially our excellent drafting team leads, and I know they are listening in and joining this session online from different parts of the world. Some of them have woken up at 1 a.m. or 2 a.m. to tune in, so that’s a really nice example of the P&AI spirit. But I would like to hand it back to you, Prateek, to get us started with our expert speakers.

Moderator – Prateek:
Thanks, Maike. I can definitely attest to the fact that the working group, it’s in the true spirit of multi-stakeholderism at the IGF that this working group was formed, and then the way they’ve worked through open, first identifying what themes to cover through an open consultation, and then they’ve worked through streamlining through information meetings, inviting speakers to talk about different topics, and then collaboratively drafted this report. So congratulations to the authors, the lead authors, the team leads, and the others who contributed. So the report is available on the website of the IGF. I would encourage you to go through it. It’s a fantastic product of collaborative effort. In the first report that we have launched today, we have three themes. The first theme is talking about interoperability of AI governance, and this is primarily focusing on convergence and divergence among different regulatory initiatives with respect to artificial intelligence. So the group has mapped various initiatives in AI governance from the EU to China to the US to Latin America to Africa, and their intention has been to put forward countries or discourse that has not been so represented in the global discussions on AI forward. So they’ve centered a lot of the Global South initiatives in this report. The second theme covered by the report is they basically tried to frame the AI life cycle for gender and race inclusion. Some of the questions that they’re asking over there are, do AI systems and harmful biases reinforce racism, sexism, homophobia, transphobia in societies? These are particularly important questions that the researchers have focused on. And then finally, the third section of the report really talks about governing AI for a just twin transition. And when we say twin transition, it’s the digital and the environmental transition. And this section really explores the intersection of AI data governance and the environment. So having talked briefly about the report, I would first invite our host country representative, Mr. Nobuo Nishigata, to say some opening remarks and also perhaps contextualize a little bit the discussions around generative AI, which kind of prompted this reflection on artificial intelligence governance through a multi-stakeholder perspective. Over to you, sir.

Nobuo Nishigata:
Good morning, good afternoon, good evening to the online participants wherever you are. My name, thanks for the kind introduction. My name is Nobuo Nishigata from the Japanese government. I work at the Ministry of Internal Affairs and Communications. And I’m doing the division director there. And I joined this network in maybe July this year. And first of all, congratulations to you all to launch the report. Just say it’s a very young organization to do this, but frankly, I was very much impressed by the content of the report. So, and I understand that this work continues beyond this IGF. So we are looking forward to working with you together further. And then just a couple of things I’d like to mention, maybe just from the content of the current report. So I understand that this report is not a comprehensive analysis type of report. Rather, this is more like having the fresh angle to what we have for AI and what we have to do for AI policy development, those kind of things. Then just let me compare that with my previous work, since I used to work at the OECD in Paris, and I was in the team who developed the council recommendation on artificial intelligence in 2019. You are the first intergovernmental kind of policy standard at that time. Then I had four years of experience out there, and then compared to that report, for example, just Pratik introduced the three main themes on the report. The first one was the interoperability in AI governance. This kind of resonates with what we had at the G7. Japan hosted the G7 meeting this year. Then we had, in April, the ministerial meeting for digital and tech ministers. And then one of the major topics out there was the interoperability of the AI governance. Then for the G7 members, interoperability means that we know that in Europe, the negotiation for the agreement for the AI Act is taking place. On the other hand, actually Japan is the first country to propose the AI policy discussion in the G7 in 2016. It has been kind of getting a long history right now. but the G7 members continue to discuss on what we should do for the better AI, trustworthy AI, those kind of things. So then, getting back to the point of interoperability, on one side of this planet, the countries are working hard to establish new legislation on AI, but on the other hand, for Japan, we don’t think we need the legislation right now on AI. We need more innovation, we want to look at the possibility of what AI can do for us, because, for example, Japan is facing a severe problem of losing the population. So, we’ve already seen the decrease in the labor force in our country, so we need more machines to sustain our economy. So that was a point back in 2016, and we asked the G7 members to discuss further on AI, because we already knew that there could be some uncertainty or risks brought by that technology, while looking at the many, many opportunities there. So that’s the reason that we wanted to start the discussion, and it goes to the OECD, UNESCO, and many organizations right now. So, it’s a great turnaround, actually. Then, for this report, I would say it’s a much wider focus. I mean, not a focus, but a wider perspective. Many different perspectives on interoperability, and we can see some commonalities, but also the differences. This is a great point, but this network discusses AI policies through the global south lens, and this is a point that the G7 doesn’t have, actually. So, to me, it’s a very refreshing thing. And maybe about the third topic of this report, this is about, of course, it’s on the environment, but of course, it deals with the data governance, right? And then while I was at the OECD, then my colleague was just launching the recommendation on enhanced access to data and the sharing of the data. And to me, that recommendation made sense a lot, but on the other hand, the same thing, once we got the real case study within this report, then of course, we saw some similarity between the report and the case study and the council recommendation from the OECD, but on the other hand, we saw some difference. And this is just brought again by the Global South Lens, so this is great, and then maybe I should stop here. So then maybe just touch on, maybe I should be back on this point, but just flagging that this year, the G7 leaders, actually, I talked about the ministerial meeting, but the ministerial meeting, the declaration went escalate up to the leaders summit this year, and then the G7 agreed to establish what we call the Hiroshima AI process, and this is more focused on the generative AI, and just taking stocks. And as well as try to identify the challenges and risks, of course, as well as the opportunities brought by this new technology, so.

Moderator – Prateek:
Thank you, sir. So one key takeaway before we come to other panelists is that this report can inform some of the G7’s work, which is coming up from a Global South perspective. I think that would be a fantastic outcome for the work that has been done here. And I just wanted to, I’ll come back to the Hiroshima process in a bit. I wanted to open the floor a little bit on generative AI, and one of the issues the report talks about is around potential monopolization around this technology. And they raise questions around how can we make generative AI systems and development more open, transparent, accountable. And I wanted to come to you, Owen, to hear your perspective on how can generative AI systems be developed in a more open, transparent way, and what is Microsoft doing in this domain? For about three minutes.

Owen Larter:
Thank you very much, and great to be here. So I’m Owen Lata from Microsoft. It’s a pleasure to be here, and congratulations on such a thoughtful report, which I think really does hit on three of the really important issues that we need to get right with artificial intelligence. We need to make sure that we’re governing this globally. We need to make sure that we’re doing this in a sustainable way, and we need to make sure that we’re doing it in an inclusive fashion. We’re very enthusiastic about AI at Microsoft, as you can probably imagine. So the generative AI that you talk about we think is gonna be very powerful in helping people be more productive in their day-to-day lives. So we have our Microsoft co-pilots, which are helping people be more productive in using our Microsoft Office technologies. We also think that this technology is just gonna be a huge opportunity in helping people better understand and manage complex systems. I think you ask a really good question about how to make sure we’re building this technology in an inclusive fashion. And one of the things that we’re really mindful of at Microsoft is hitting these fairness goals, doing things in an inclusive fashion. So that starts by having really diverse teams at Microsoft that are building these technologies. So part of our Responsible AI program at Microsoft is our Responsible AI Standard. We have three goals in there, which are our fairness goals, F1, F2, F3. People can go and see our Responsible AI Standard, which is a public document that we’ve shared so that others can critique it and build on it. And a key part of these goals is making sure that we’re bringing together people from a diversity of backgrounds to build these systems. So people with research backgrounds, people with engineering expertise, people that have worked on products, people with legal and policy backgrounds, and people that have worked on issues like sociology, like anthropology, so we have a really diverse set of inputs into how a technology or a system is being designed. I think more broadly, beyond that, there’s a really big question on how we make sure that we’re having a sort of representative conversation around governance as well. So we’ve been doing some work at Microsoft to try and broaden the range of inputs that we get into our Responsible AI program. We have a Responsible AI Fellowship program that we’ve set up. We’ve been running this for about a year now, and this is really pulling together some of the brightest minds from across the global South working on responsible AI issues to help inform the way that we are designing the technology, but also designing our governance program. So we have fellows from Nigeria, Sri Lanka, India, Kyrgyzstan. These have been really rich conversations to hear about how others across the world are thinking about these technologies and how to use them responsibly, and so we look forward to taking that work forward.

Moderator – Prateek:
If I can press you a little bit on this point about openness versus closed AI development, we have seen several open-source initiatives, and there are several which are not. How would you weigh in on this debate?

Owen Larter:
Yeah, it’s a great question, and I think open-source is really important. I think open-source is gonna be really important to helping advance an understanding of how to use this technology safely. I think it’s also gonna be really, really important in making sure that we’re distributing the benefits of this technology in a broad way. I think open-source can play a really important role there. So we’re very supportive of open-source. We’re a big contributor to the open-source community. We open-source a number of our models. GitHub, which people might be familiar with, is a Microsoft company that has been a big open-source ethos in its spirit. I think there are some questions around the trade-off between openness and safety and security at a certain level, and I think really highly capable models, what sometimes people refer to as frontier models, which are sort of at the highest end of capabilities of what we have today or beyond, I think there are real questions there around whether it makes sense to open source those, or if you are gonna open source them, or to explore different ways of making these models available, perhaps having some kind of middle path where you don’t necessarily release the model weights, but you advance greater access to the technology. So I think attention there, but I think it’s really important that we appreciate that open source will be a really important part of the discussion going forward.

Moderator – Prateek:
Thanks, Irwin, for those thoughts. And I think perhaps this is something that is food for thought also for the next work plan of this group to think about open source models and how can that be integrated in the policy discussions further. Soraya, I wanted to turn to you also on this question around generative AI. And the report also talks about some of the potential risks and harms to democracy, human rights, rule of law, and so on. And one that I would like to focus on is disinformation and misinformation. Can you share with us what are the ways in which generative AI systems can be used towards spreading disinformation? And what could be some ways in which we could address this?

Sarayu Natarajan:
Thank you very much. Thank you to the audience for being here and the online audience. I’m assuming there are a range of competing factors for you, dinner, lunch, sleep. So thank you very much for being a part of this conversation. Congratulations also to the team that’s written the report. It’s been through most of it and it’s a fantastic report. It has a cadence and a thoughtfulness that comes from collaborative work. and it was absolutely wonderful to read, so congratulations on that. Delving into the specific question on misinformation and disinformation, I think it’s critical to understand first how generative AI can enable the creation of misinfo and disinfo. What generative AI does, or what the capabilities of generative AI can imply is that the cost of generating content, which is the base of misinfo or disinfo, is basically zero. So if you are capable of writing the right kind of query or code, it’s quite easy to generate information. As language models are available in very many other languages, this capability is also therefore available in several languages. So what has happened through generative AI is the reduction of the cost of content production to zero. The internet and digital transmission in general has reduced the cost of transmission to zero. So when you put those two together, there’s absolutely no friction to the production and dissemination of problematic content. And problematic content, I mean there are several typologies in the literature, I mean between misinfo and disinfo, there are differences, and the consequences of these can also be manifold. In terms of reduction, I think, or stemming or curbing misinformation and disinformation, I think the report, while may not specifically focusing on these areas, does put in place or talk about several approaches that could be used. One of course is thinking about AI or AI, generative AI is embedded in specific contexts and taking a very context-specific lens to the stemming of misinfo, disinfo. Which means understanding both the context in which this is generated, so is it corporate, is it the state? who’s responsible for the generation of information, and then also spending time to understand how this disinformation process works. But all of this, the takeaway I would say is that in the absence of broader protections embedded within the law, and I say this carefully, are conscious that misinformation, disinformation is a polemical topic in its own right. But the rule of law as a guiding frame within which any inquiry about how to stem this problem might be the right approach to start with.

Moderator – Prateek:
Thanks, Rahim, for that. Nobu, you’ve mentioned briefly the Hiroshima process and that it is going to focus on generative AI. We’ve heard that quite a bit over the past three, four days. Can you give us some specifics of what is it that it is looking at? What are the kind of principles? I don’t know if it’s advanced enough for you to share that, but can you shed some more light on what they’re going to come up with?

Nobuo Nishigata:
Maybe a couple points until now. I mean, that the process has not ended yet, and actually the G7 delegation taskforce teams, the very hard negotiation they engaged in and then tried to finalize a report back to the leaders by the end of this year. So, but still, though, as interim, maybe I can introduce that the G7 ministers agreed on the ministerial declaration for the interim. It just published in early September, I think the 7th of September this year. Then, like a couple of things, I mean, the discussion is more having more focus on the code of conduct from the private sector. That’s the first one. So it’s more like a voluntary things. but on the other hand, we have some discussion particularly just we are talking about the misinformation, disinformation type of things and then we have some discussion about the watermarking. Maybe they’re very aligned with what she said about the proof or just made by AI, those kind of things. So maybe we are in a good alignment, I would say.

Moderator – Prateek:
Thanks. Those specifics help. So I wanted to move to this major part in the report which is focusing on interoperability of AI governance and I wanted to turn to Professor Xing Li who has worked very closely on internet governance and Professor Li wanted to understand what can we learn from internet governance to inform AI governance when it comes to interoperability.

Xing Li:
Okay, thank you very much for inviting me in this panel and Professor Xing Li from China, from Tsinghua University and actually 30 years ago, China connected to internet and we tried to participate and get into the different level of management or governance and at a higher level actually, the government should need to permit this kind of access and from technical layer actually, a lot of things. For internet, if we took look after evolution of the internet technology, there is IETF, Internet Engineering Task Force. That’s exactly for the technical interoperability works and engineers as individual work on that. So and then there are some other things, for example, the number assignment which is regional internet registries and the names that’s ICANN and a couple years ago, there is IANA transition so that make it from US centric to the global playground. So actually now it’s chart GPT and AI and I’m actually very excited on that and I believe Generative AI is something maybe even bigger than TCP IP. However Take a look of this area. We don’t have IETF. We don’t have this kind of organization So maybe that’s something we should take a look at that and work on that So and another thing actually I feel for internet from original technology and the evolute and the invention of the WWW and other technologies and the people try to understand things and there is no blueprints for that for generative AI I have a feeling probably the regulation Get into too early. We need to have innovation space at least for academics and the technical group we have some innovation space. Otherwise, it’s very difficult to move forward and Inside the country probably is okay for create a innovation place Actually, I really want global as a global village academics can work together and to make things more exciting. Thank you very much

Moderator – Prateek:
Thank you, sir Touching a briefly upon some of the the ways of governing internet that you mentioned this report actually Also talks about under the interoperability dimensions three things interoperability at the level of substantive tools Guidelines norms and so on then interoperability at the level of mechanisms for multi-stakeholder engagement And finally also talks about agreed ways of communication which really talks about agree agreeing on definitions concepts semantic interoperability Jose I wanted to turn to you and When you read the report, what were some of the key aspects around, say, the recommendations around interoperability that struck out for you, and what do you think about that?

Jose:
Hello, well, thank you very much, Pratik. It’s a great pleasure to be here. Looking at the report, one thing that I identified is that I think that we need to, considering that it is a report made by Global South representatives for the Global South, I think that one thing that we need to advance is in understanding what are the movements that we have in our own region, and within this, understand exactly what are the policies that we are driving forward, what are the narratives that we are pushing forward. And I think that when we look into regulation, it is, I think that it is a great thing that we are not focusing just on what’s going on in the EU and other countries which, and regions, bloc countries which are leading this debate, but also to those within the Global South. And I think that the main thing that we need to advance is understanding what are the points that are missing in the discussion, and I think that the report touches upon some of these issues, but I think that, especially when we look at our region, the main thing is that we need to understand what are specific challenges that we have, and I would like to mention, for instance, issues related to labor. We are not touching upon yet the impacts that the development of these systems, that the industry, the tech industry as a whole is having on labor, and I’m not talking about just what will be the future of work, let’s say, what people working now in offices will need in the next few years so that they are not left behind, let’s say, in this. in the advancement of, after the advancement of these technologies, but also what is happening with those so-called gig workers. In Brazil, at least, I can say that there is an intricate link between what’s happening with the people working in delivery platforms and issues related to race, and this relates also to the survival, let’s say. In Brazil, the increase in the deaths by drivers, motorcycle drivers, has increased in, if I’m not mistaken, it was like 80% in the last 10 years, and one of the reasons that many scholars have been debating is the fact that we have new platforms that demand other kinds of times of delivery, which pressure these workers in an extremely different way. So I think that this is an issue that we need to tackle, especially in our region, and also I think that we need to go deeper in the debates regarding sustainability. We’re gonna, the report is just upon this theme, and I think that it advances a lot in issues like tackling techno-solutionism, techno-optimism, because this discussion goal was against, not to say merely, but the strict issue on energy consumption, greenhouse gas emissions, and et cetera. It is upon politics. We have seen, for instance, some leaders, let’s say CEOs of tech companies, say, talking about the issues in Bolivia as that happened with, after President Evo Morales was out, and supposedly interests regarding the minerals that Bolivia has, especially lithium. So I think that we also need to advance this. And maybe one last point that I would touch upon in this question so I don’t take more time is the debate on biometric surveillance or the use of biometric systems. especially for surveillance purposes within and in the borders of countries is another issue that we need to take seriously. And considering issues related to, for instance, talking about Brazil once again, the structural racism that pervades, that passes through the criminal system is just being automated and accelerated with the development of these technologies. And a tech fix on them won’t solve it. We need to start thinking seriously about whether we are going to establish moratoriums for the systems or especially banning, which is one agenda that we are having very strongly in Brazil that I think that we could talk about it in the next report, how civil society is pushing forward for the banning of the systems over there. And yeah, I would say that it is one of the main issues that we are currently debating there. Thank you.

Moderator – Prateek:
Thanks, Jose. So I think you put it three points, right? There is the data, which is coming a lot from the global south. There’s the workers who are working on that data. And then there is the natural resources. And these three elements also come out quite strongly in the report or the case studies that we have seen in the report. And important for global discourse. Now I would like to turn to you, Mr. Jean-Francois. You read the report and you’ve seen some of the key challenges that are being mentioned. One of the things that the report talks about is also capacity building. And how do you see, how can we strengthen capacities in the global south, for instance, for engaging? first with processes, multi-stakeholder processes on governance, but also on the development use of AI. Over to you.

Jean Francois ODJEBA BONBHEL:
Okay, thank you so much. My name is Jean-Francois Bomben. I come from Congo, Brazzaville, and I’m AI and emerging technologies expert in regulatory. I’m working with RPC, which is the authority of regulatory in Congo, and we are expecting many things from AI and generative AI, but also fear about what is inside the box, you know? AI seem like a black box with many things inside, and so we are pushing that by three points. So the first one is benefits versus risk, and the second one is about accountability with the controller, and the last one is about education. I mean, education is a big part in our strategy. We created a school specialized in AI, from elementary to graduate one in Congo to make sure that our kids to educate population in general and make sure that everyone can access to the technologies and to know what is coming in, and all innovation can change life and bring developments. So make sure that no one will be keep outside. of that technology, that’s all.

Moderator – Prateek:
Thank you so much. I would now turn a bit to the second section of the report, which is really focusing on gender and race. But before that, I would also let the participants here know I’ll open the floor in about five minutes for questions, so feel free to, if you have something in mind as well. So the report cites the UN Human Rights Council, which said technology is a product of society, its values, and its priorities, and even its inequities, including those related to racism and intolerance. Sorayu, I wanted to turn to you and understand, do you have examples of gender or racial biases in AI systems that have impacted individuals or communities, and at the same time, if you can also give another example where, which also, the report also talks about some of those, where AI systems have been used to actually combat gender bias that we have in society. So, over to you.

Sarayu Natarajan:
Thank you for that question. I think it’s a broad and difficult one, and I’ll try my best to do as much as I can. I mean, before we jump into the question of gender bias and gender in AI systems, and along with gender, other forms of biases, such as racial, language, et cetera, do creep into AI systems, and it’s hard to talk about them in aggregates, because they do have their own specific politics. But having said that, there are some commonalities in these forms of intersections. But if you want to pick one, go with one and go with the specifics. Sure, sure, thank you. Okay, so the, before delving into the question of bias itself, I think it’s important to tackle. very briefly the forms of injustice that generative AI systems might have. One of course is the notion of data, data injustice that emerges in the context of gender, race, language, et cetera. And I’ll probably tackle language. There is also the injustice of labor and Jose here did refer to it. But to imagine generative AI without talking about the labor of annotators who make generative AI in the sense that they label data, they annotate data, they categorize data in ways that are accessible to researchers and scholars and builders of AI is very critical. Rather than delving into specific examples of language or gender bias that AI systems perpetuate, let’s talk about generative AI and the labor of generative AI. A lot of this labor is done in the global south. So millions of workers through various forms, platforms, sometimes in the form of large contracted organizations, work on data sets and labeling data sets and this is applicable to generative AI and several other forms of AI. Now in order to label, and let’s say you’re labeling a car or a bus or a vehicle or language or gender or race, the categories within which you label are often created in the west, which is that the company that’s getting AI made is the one that’s asking you to label, I don’t know, like a llama or a cow, objects which are often unfamiliar to the people that are labeling. So the origin of bias in a certain way is of course the larger politics of how AI is made, but it also is mediated by very, very different but it also is mediated by very, very specific practices. Around language, around even English language, language as being an input into large language models. So I think in order to talk about bias, it’s important to talk about labor, labor supply chains, the way in which AI itself is made, the way in which labeling and labeling categories are created. Jumping into how AI might enable or mitigate bias, I think there are several examples, but one specific example, and rather much more a concerted effort that has happened over time is in the Indian context that our efforts to develop large language models in non-mainstream languages. Several of these efforts, fortunately or unfortunately, have been spearheaded by small organizations who work in specific communities. And these efforts might make some of the benefits of generative AI accessible to wider communities in the languages that they speak. So I’ll pause here and hand back to you.

Moderator – Prateek:
Thanks, and I’ll add to that. There are some questions online. I’ll add to that also that in, for instance, in Africa, there’s a research group working on low resource African languages called the Masakhane community. So if anyone is interested to work with them or join them or support them, please do check out, they’re doing some fantastic work to create data sets in African languages as well. I would like to also turn to folks online. I don’t see you here, but if there are some questions from our online moderator, Shamira. Yeah, if Shamira, you can pick one or two questions online.

Shamira Ahmed:
Yes, sure. I will go to the first question we got. Thank you, Pratik. Can you hear me?

Moderator – Prateek:
Yes, very well.

Shamira Ahmed:
So the first question we got, I’m just going there quickly. The first question we got. was what, from Prince Andrew Livingston, what international collaborations and agreements are needed to govern AI on a global scale?

Moderator – Prateek:
OK, thank you. Can we collect a few questions?

Shamira Ahmed:
Yeah. And then the next question, I’m not sure if virtual attendees can raise their hand and pose their questions directly to you as well. Let’s see if there’s a question in the chat, and then we come back for people who may want to take the floor later. OK. And the next question was from Ayalo Shebeshi. And they had two questions. The first one was, how can we approach both the negative and positive impacts of AI, especially in global South and developing countries that are replacing human jobs? And then the next question was, how can we manage standards and international regulation of AI initiated by international bodies, as part of the UN and other agencies, and make sure that there is full agreement by all countries and nations?

Moderator – Prateek:
Thanks. Thanks, Shamira. So we have three questions. Two are quite similar. But I’d request you to hold on a bit, because I want to collect at least two questions from the room as well. And then we address everything at the same time. Anyone in the room would like to take the floor, please? Yes, please, our colleague from the UN University.

Audience:
Good morning. Good morning. Jingbo from UN University. Actually, this is much more intimate so we can communicate. So my question is related to capacity building. We know that AI is less to do with engineering science and more to do with empirical science, which means AI is not like an engineering product that you design and you know what’s gonna happen. Even the researchers, even the designer don’t necessarily know what’s gonna happen. So along the way, they have to test, experiment to find out what are the risks, what are the potential benefits extra. So my question is related to the difficulty in capacity building because things keep changing. Even the designer don’t know where to go and meanwhile, there’s misinformation, there’s different sources of information and how do we build capacity? How do we teach, for example, the school children or even our peers, my grandmother, for example. How do we let them know, inform them of what’s going on? Thank you.

Moderator – Prateek:
Thank you so much. We have our colleague from EY. Sir.

Audience:
Hi, Ansgar Kuna from EY. In AI, as with a number of these digital technologies that are arising, we’re seeing a blurring between the lines of sort of the more political space of regulatory development and the technical space of standards development. Standards are becoming increasingly an instrument in the implementation side of the regulations. And so my question is around the capacity building of enabling a wider community of stakeholders to engage in that sort of technical side that has become an important part of the bigger policy instruments.

Moderator – Prateek:
Thank you so much. So we have five questions. I will not go to each one of you to answer because that will take us ages. Who wants to take the questions around governance and the governance framework? So there’s one set around governance and how to make AI governance globally works. So I see Oven wants that. And then there’s one set around capacity building, both at the technical level, but also going to schools and so on. And so first we go to Oven then. Yeah, over to you.

Owen Larter:
Sounds good, thank you. So I think this is a really important question to ask. How do we build a coherent global governance framework for AI? And I think it’s important to realize that there is a difference between having a sort of globally coherent framework and having identical regulation in every single country. I don’t think we wanna get to the latter. I think what we want to have is a set of principles, probably a code of conduct that sets a high bar globally, but then allows individual countries to take those standards and implement them in a way that makes sense for them. I really think on this global governance conversation, we’ve made an enormous amount of progress over the last year. We’re coming up actually to quite a significant milestone. On the 30th of November, 2022, you had the launch of ChatGPT, which really did change the conversation, it seems, amongst the public and amongst lawmakers around the use of these technologies and their impacts on society. I think the progress that has been made is really quite significant. I think you’ve seen that this week in the types of conversations that we’re having here. Very importantly, you see that in the reports that has been put together. I think the G7 code of conduct through the Hiroshima process under the Japanese leadership, very, very important in terms of advancing this global conversation around how to develop and use AI. I do think you’ve sort of got the building blocks in place now for a sort of longer term conversation around what global governance should look like. I think as we have that conversation, we should sort of take a step back and think about a couple of. things. The first is what ultimately do we want a global governance framework to do? And secondly, what can we learn from existing global governance regimes? And I think there’s probably at least three things that we want this framework of the future to do. The first is around standards setting. I think standards are going to be really important in terms of advancing this coherent global regime. I think there are great lessons to be drawn from organizations like ICAO, the International Civil Aviation Organization, part of the UN family, where you have a really broad global representative conversation with pretty much every country in the world participating in it to set global safety and security standards that are then implemented by domestic government. So I think that kind of standard setting piece is really important. I also think having a conversation, this was addressed in the report as well, around advancing an understanding and consensus around risks. I think this is a really important piece of the global discussion. You can look at organizations like the Intergovernmental Panel on Climate Change, for example, again, part of the UN family that I think has done a really good job of advancing an evidence-based understanding of the risks around climate. I think we should be looking to do something similar when it comes to AI. Then maybe the final piece that I’ll mention, and this sort of bleeds over into the capacity building conversation, is around building out infrastructure. So this technology is moving so quickly, it’s easy to forget sometimes that it is still relatively new. So the transformer architecture that underpins these large language models that are causing a lot of excitement and enthusiasm at the moment is only six years old, developed in 2017. There are an enormous amount of open research questions that we really need to continue to invest in and tackle. We need to provide the infrastructure to academics and researchers to be able to do that. So there are interesting proposals in the US, for example, for something called the National AI Research Resource. This is an idea of developing publicly available compute data and model that academics would be able to use to study these technologies more and advance our understanding around them. So, technology investment in the infrastructure. One piece I would really emphasize there is the importance of developing evaluations for these technologies. It’s a very difficult space with a lot of open gaps at the moment. We need to make some progress there. Then the final point I’ll make, and then I’ll stop talking, is around the social infrastructure as well. So, we need to be able to find sustained ways of having global conversations, building on, I think, the great progress that we’ve made this year and conversations like this, to have a really globally representative discussion around these issues that allows us to, quite frankly, monitor how the technology goes, keep track of things as technology progresses, and be able to adjust and be nimble with how we’re approaching these things as a global community.

Moderator – Prateek:
Thanks, Sobhan. I saw that, Jose, you wanted to comment also on the governance part, and then Nobu, I’ll turn quickly to you as well on how you see the global governance landscape evolving as well.

Jose:
Thank you, Pratik. When we discuss the global governance of AI, I have to admit that, at the moment in which we are, I am quite skeptical that we are, at this moment, going to advance something in this regard, in a way that does not mean a race to the bottom regarding what are the parameters that we have to govern these technologies and their impact. I mean this because we have an interplay of many narratives going on, which lead many countries, and like this, I’m talking about geopolitics, the geopolitics of these technologies and issues related to competition, to the supposedly AI race that exists, and which has been framed as something quite similar to what we had during the Cold War. So, I think that this is a huge barrier for us to overcome. if we want to have a reasonable sort of global AI governance regulation or whatever way we can frame this. And I think that especially if we consider the countries from our region, and I’m talking about the majority world, global south, I think that there are forum, there are some forum that we need to push forward this agenda in order to have our interests in play. And I’m talking about the BRICS, G20 to some degree as we have the presence of many countries from Latin America, from the African continent, from South Asia and et cetera. And this means a pressure on the global north because they are the ones who have developed these technologies who are pushing them forward and who are having their, especially their companies dictate the agenda of what is a tech worth of our attention or not. But if I were to pinpoint two points, and now I’m gonna make reference once again to the issues of labor and of the extraction of natural resources as it’s commonly called. I think that first of all, maybe I’ll tell you a story to illustrate this. In the beginning of the year, there was a genocide in the Brazilian Amazon of a specific ethnicity called the Anamames. And these people was being killed and had their territories invaded by gold miners, illegal gold miners. And afterwards, after a while, our federal police identified that one of the companies which was dealing with these illegal gold miners was selling gold to companies like Amazon, Apple, Google, and Microsoft. And so this is one thing, we need to deal with the issue. related to the expression of these resources and the impacts that they have in groups, which, and of course, when we’re talking about global governance of technology, we’re saying also that there are groups that seem to matter more than others. And I’d say that the anomalies in this case seem to be in the side of the ones that are less worried about. And so, in this point, I think that we need to start seriously thinking on how to deal with the materials that we are doing, with the lives that we’re impacting. And here, once again, and I think that the point related to also the clique workers and the ones who are helping develop these technologies who come from the global south, I think this is also a necessary discussion that we need to have in a global governance, be it through the control by workers of algorithms and of the decisions being made by these companies, or, and of course, better working conditions for them, and having the due responsibility of the companies who are in the higher edge of the chain to be responsible for what’s going on in these situations.

Moderator – Prateek:
Thanks, Jose. So, in a way, the point that Erwin was also making around accountability across the value chain and evaluations and so on should be part of the global governance frameworks and these evidence-based processes that are being talked about. Nobu, coming from a government, where do you see is the global governance of AI going? And what is your perspective on that?

Nobuo Nishigata:
Let me say that from the global governance, of course, that’s one single country’s government, and of course, we are looking at what is taking place in the global sphere, like here, OECD, UN, of course, and UNESCO, et cetera, or ITU. Then, like, to me… as a government person, I mean, I cannot write a code, honestly, but on the other hand, I can write the legislation in Japanese. This is my job, right? So then, you know, like we want to have some room to have our own, like, you know, for example, once we have the treaty in the top, of course, we respect the treaty, we sign it, and then we have to do something to be aligned with the treaty, right? So maybe, like, for this case, I mean, it’s, I’m not sure, it’s maybe too early, you know, I recognize that some, particularly the Council of Europe people are for, you know, working hard to get a framework treaty convention, and I was in some negotiation in the same way, but, so once we, for example, once we got some treaty on AI, but we don’t want to have the very strict treaty, because it’s a moving target. I mean, when it comes to the, like, a human right, or like, maybe for the war, etc., then it could be that way, but on the other hand, for this case of AI, then we don’t want to have the very strict upper hand, and then so that we don’t have any room to do our own, I mean, it’s not only for Japan, I would say it’s for every government, and every government workers out there. Then, so then, they’re getting some growth, so from the point of the OECD, then, I mean, I’m not studying the OECD anymore, because I graduated from there, but still, I mean, their principles are very simple, like, five value-based principles, like, you know, like some people said about accountability, yes, there is one, and explainability, and safety, security, robustness, those kind of things, and then this comes first, I mean, you know, the advanced AI systems dealing with the human, then we need a safety, right? Then, of course, the principle touches on the privacy, and fairness, and human rights issues, and then that’s it. Then, on the other hand, we have five more principles, but it’s more like a guidance to the government, like, you know, government has to work on the ecosystem, the government has to work on some skills or capacity building, or etc, like a regulation if needed, or maybe creating some testbed to facilitate their own work in every place in the world. And of course, in the end, as we see this perspective, then we want to have more collaborations between the countries, so then the last principle is about talking about international collaboration, both in the policies and the techniques and standards, etc. So, I mean, you know, there are a bunch of different international organizations, and each organization has a different membership, different mandate, etc. So, it should be there, you know, it’s natural to have the very various type of for example, recommendation principles, guidelines, etc. But still, I mean, the bottom line is not very different, I would say. I mean, just the thing that OECD was about only the first one, but still, I mean, you know, I don’t see that there are too many different things. I mean, you know, it’s more like a version of each organization, and they have to do it, because, you know, each organization is a different body.

Moderator – Prateek:
I can say that all the international organizations working on this are mostly coordinated, so, I mean, from UNESCO, we do work with the OECD, with the Council of Europe, the African Union, with the European Commission, to at least exchange where the work is going, because at the end of the day…

Nobuo Nishigata:
I mean, we can share the episode of the Pratik and I, I used to have the lunch over the river of the Seine in Paris, right?

Moderator – Prateek:
Exactly. So thanks for that. I want to turn now to the second kind of set of questions that we had around capacity building. So I’ll turn first to to Mikey, then to Sorayu, Professor Ching-Li, and to Jean-Francois. What is it that we need to strengthen capacities across different levels and for different things from development of technical standards to development of governance to just using AI in our daily life to detecting this information. We had a wide variety of capacities that were mentioned. So maybe each one of you can pick on some. Over to you, Mikey.

Maikki Sipinen:
So the audience questions were about capacity building as well as how might be enabled a wider community to take part in this AI dialogues and debates. And from the way I see it, they are sort of parts of one and same. So of course, we need to improve our efforts in introducing AI and data governance topics in schools and universities and training citizens and the labor force, at least in basics of AI. And there are many amazing initiatives to be found everywhere in the world. For example, in Finland, where I’m from, I think Finnish AI strategy managed to train more than 2% of the Finnish population in basics of AI just in under one year. So that’s a good benchmark on what’s possible if there is a well. And something else I’d like to highlight here is the capacity building of civil servants and policy makers since this is an area that would really deserve and require even more space in the AI governance discussion. I like to know what just the moment that gets said, like I can’t write the. I can write the regulation for Japan and this is exactly what we should all understand and appreciate that we need different kinds of AI expertise to come in and work together so that we can make this global AI governance happen so that it’s inclusive and fair for us. Maybe you already guessed that capacity building is my personal favorite topic under AI and earlier this spring we were kind of brainstorming and discussing with the P&AI community like which topics to select for a report because it’s quite obvious that not not all AI and data governance related thing can be included and covered in one report and I was kind of secretly hoping that someone would suggest this capacity building and I was a bit bummed when that didn’t happen, but over the past month I realized that this is sort of naturally interwoven in all of our report topics as well as for government. All the groups in the end navigated towards capacity building and included some recommendations or sentences of that. It’s really in the core of all our topics and I trust that in the coming years we will have more focus on capacity building global dialogues as well.

Moderator – Prateek:
Thanks, Maiki. Sorayu, you would like to take that?

Sarayu Natarajan:
Thank you, you’re absolutely right. There are multiple categories of capacity building, multiple groups that need to engage with AI technology, different types of AI technology in different contexts. So you’re absolutely right. right, there’s the ability of the population, citizens at large, to engage with AI and get the best out of it in a certain way, or at least not be harmed by it. Then there is the question of how do technical communities from different domains, the legal, policy, governance community on the one hand, the technical community on the other, how do they talk to each other? And maybe I’ll focus on that very briefly. I mean, my starting point on capacity building, adult education, whatever you want to call it, is the idea of mutuality, which is that both of these disciplines, both of these sort of empirical starting points need to be able to talk to each other in a meaningful way. Just as much as I have benefited from learning about embedding in large language models, I do think technical communities would benefit from understanding, for example, the politics of category creation. Given the empirical emphasis of gender to AI. Understanding non-negotiable human rights, the role of state vis-a-vis citizen rights. So having a sense of these is a mutual expectation and a mutual process. And I think that having various fora that enable these in a non-judgmental way, in a recognition of various empirical starting points is critical. The other, I thought I heard a question on the gains and harms of AI, including specifically on job loss. And right away, or? Please feel free to take that as well. Right, thank you. I think there was a question on some of the gains and harms of AI, and specifically focused, if I understood it correctly, on job loss. I do think it’s a genuine challenge, particularly from some forms of generative AI. I do think as a society, we are still starting to gather the evidence and understand how different forms of, different applications of generative AI might cause different. different consequences, particularly around jobs. The legal community, the tech community, the coding community particularly, are likely to be affected by the easy availability of the capabilities of generative AI. That is understood, but I think the degree and extent is to be better imagined. There is an ILO report that does say that the impacts of job loss are more likely to be felt in the global north. Consequently, the global south will actually gain from very specific types of jobs that generative AI will generate. And I think we’ll have to be careful and observe this a little bit more. It shouldn’t be that, in a certain way, the capacities of generative AI ends up further ossifying the barriers and the types of jobs that exist in different parts of the world, that it’s not some forms of click work alone that remain, and then some of the skilled jobs, particularly ones that relate to category creation, we want to go back to that point, remain with existing powers. So I think we have to keep watching this job loss question. And then, of course, humane conditions, just conditions for workers in different parts of the world. So I’ll pause there.

Moderator – Prateek:
Thanks. Professor Xing Li.

Xing Li:
Oh, okay. Capability beauty is also my favorite topic. Actually, people, I believe this creates, generic AI create opportunities, but also the challenges to the global south. And the people refer genetic AI to three factors, usually that’s the algorithm, the computing power, and the data. And actually, I would like to add another thing that’s more important, that’s education. That’s very, very important, that’s the human resource. And the traditional education sometimes need to be changed. I believe in this AI age, yes. Four things very important, the first is critical thinking. In the old days, student just follow what teacher said. However, if it’s a rabbit or AI, then you have to have the ability for critical thinking, that’s first thing. Second, everything should based on fact. Third, logical thinking. And the finally, but also very important, global collaboration. The people need, okay, the youngsters need to have ability in these four areas. I believe it’s important. So I really like to see the global AI related education system. That may be as many years ago, I mean hundreds year ago, that’s creation of the modern university. We need some kind of new educational systems in the AI age. One of the lady professor from Stanford University, Fei-Fei Li said, we need Newton and Einstein in AI age. Thank you.

Moderator – Prateek:
Thank you, sir. A plug for UNESCO colleagues working on education and AI. They’ve launched some guidelines on generative AI and education. So if you’re interested, feel free to check that out as well. I’ll move to Jean-Francois.

Jean Francois ODJEBA BONBHEL:
Okay, thank you. So I will switch to French, I’m more proficient in French. So I will speak to you in French. I would switch to French because I want to make sure that I can express myself correctly, my thoughts. In terms of this capability, skill sets, I think this is overall, and it is global. I think that this is something that we can implement, that we could put in place different training sessions, and then everybody would be on the same footing. And there are various perspectives which we could put forward as a teacher, as an educator, and also as a software developer. We have devised different processes for AI. I work with computers and I work specifically on governance. So what I do see, I understand the world that I live in, that I work in. I am working on a world, I am devising that world where my children are going to be living. I work with researchers, developers, and I also take a step back and I ask myself a question. Is it actually that world, that specific world that I want to think about, that I want to create design for my children, for them to be able to live there? To make sure, how am I going to be certain of that? Now with regard to capacity building, we have implemented a program specifically designed for kids, for children which work with technology at the age of 6 until the age of 17. And we thought about numerous questions and ideas. We said to ourselves, what do we want? Do we want these youngsters to become experts? Do we want all of them to become experts in technology? Or simply, are we going to be preparing them, are we going to be endowing them with the necessary skill sets for the life, for the world where they are going to be living in? Now with regard to the solutions that we are going to be providing for them, we want to have a multi-faceted solution for them. What is the environment that they are going to be living? Is it the teacher or the parent? What are the options that we are going to be bringing about? Of course, while there are sanctions or punishments of course, if you don’t learn this or that, you’re going to be punished. Now our children and us, ourselves, we live in a world where these solutions, there are multiple solutions. We are developing their cognitive skill sets so that they have options, they have solutions so that they can have different ways of resolving all of these problems. There are multiple solutions. So this is a facet in developing or the skill building that we are talking about here today. And it is part of it. It is part and parcel. We need to be able to equip our children with the necessary skill sets of AI that is aligned with numerous and multiple solutions. And this is the focus that we have been developing, devising and working together. I feel that this is the appropriate approach within the educational school environment.

Moderator – Prateek:
On the final segment, and I see we have only 10 minutes left. So we have Shamira, our colleague online, who has worked on the environmental aspects of the report. Shamira, would you like to share some of the key insights from the discussions that you’ve had around environment and data governance and some of the case studies that you’ve presented in the report?

Shamira Ahmed:
Sure. AI and the environment and data governance is quite broad. But because the focus of the report was on the data governance aspects, we focused on the data governance aspects at the nexus of AI and the environment. And in summary, our recommendations collectively offer a multi-stakeholder perspective from the global south. And as mentioned by the other speakers, to promote interoperable AI governance innovations that harness the potential of AI, we focus on the multi-dimensional aspects of AI. of data for sustainable digital development. And sustainable digital development is basically a way forward of leveraging digital technologies that also considers environmental aspects, economic aspects, and also societal aspects in one comprehensive Venn diagram, let’s say. And we also discussed addressing historical injustices. We advocated for a decolonial informed approach to the geopolitical power dynamic that some people have mentioned in, for example, the materiality of AI when we consider the value chains and of the materials that go into AI. When we consider the multi-stakeholder process, is it really representative? And are the standards made in consideration of innovation ecosystems or global self-institutional mechanisms and situations? So we also talked about inclusion and minimizing environmental harms as many of the other speakers have highlighted as well. So in summary, we highlighted that a just green digital transition is vital for achieving a sustainable and equitable future and the way that leverages AI in order to drive responsible practices for the environment that promotes economic growth, social inclusion, and essentially provide a pathway toward a more resilient and sustainable world that actually meets the contextual realities of the global South. Most of the panelists have summarized and captured the report quite succinctly. And as Mikey mentioned, we mentioned capacity building. We talked about geopolitics. We talk about. the environmental aspects of AI. We talked about the data governance aspects. We talked about interoperability and defining key terms. So I think it’s a comprehensive report and I learned a lot during the making of the writing report and it was a truly bottom-up multi-stakeholder process. Thank you.

Moderator – Prateek:
Thank you, Shamira, for sharing that summary and also some of the important work that you’ve been doing on environment. I would now really open the floor here, not for questions, but for any recommendations that we have from the audience here on what could be other issues that this group would explore and this is also an invitation to you to join. This is a multi-stakeholder policy network to join this group. So are there any folks who would like to share any recommendations or thoughts? Yes, sir.

Audience:
Oh, hi. My name’s Yeo Lee from World Digital Technology Academy. We do research, also training and material for education on AI. And Pradeep, you mentioned UNESCO will do more for AI training and education. Right now, for example, for our published book and textbook, we provide it to a lot of universities, particularly in China, developing countries, global thoughts. So in the future, UNESCO will have any process for us to contribute or will you do training just yourself?

Moderator – Prateek:
I will definitely come back to you on that bilaterally because the session is not on UNESCO, it’s on. on the Policymakers Network, but happy to share what we are doing and link you with colleagues in the education work, for sure. Thank you. Anyone else who would like to share any recommendations? And perhaps if there’s someone online as well, Shamira.

Shamira Ahmed:
Yes, there is someone online.

Moderator – Prateek:
Okay. So if there’s someone who would like to take the floor online, I believe.

Shamira Ahmed:
Yes, I think the host should give rights and the person who’s raised their hand should put their video on.

Moderator – Prateek:
Okay, while we wait, we can go to the floor here.

Audience:
Hi, Ansgar Kuhne from EY. I think an important aspect is going to be the question about how we do the assessment to test whether the systems are achieving what we want them to be able to achieve. And specifically the question that is often raised is how do we assess the non-strictly technical aspects around the performance, such as the challenges around how the system is actually operating in the context of places where they haven’t been built and whether they are having unintended consequences in that kind of context on, for instance, the workers in these environments or the way in which people are being categorized through these systems. So thinking through the assessment assurance process of how these systems are operating, especially on those non-technical properties of the systems.

Moderator – Prateek:
Thank you so much. So I’ll now turn back to the panelists for three keywords which are future oriented and what this group should look at. You’ll see the screens, we have four minutes, so you have only three keywords each. Maybe we start with Nobu on the other side.

Nobuo Nishigata:
Three keywords, maybe we have to continue about the picture for this forum, the global south, education, and then maybe harmonization.

Moderator – Prateek:
Thank you. Mikey.

Maikki Sipinen:
I choose the keywords inclusive, future, and ENAI.

Moderator – Prateek:
Thanks. Joseph. Oh, they’ve got, I would say, let’s keep up with the initiative, let’s include other things, let’s go further in the ones that we have already debated, and this kind of initiative in this forum is fundamental, and thanks for the IGF for that, I guess it was in this opportunity that this was all possible. Thank you. Soraya.

Sarayu Natarajan:
Global, not global south, workers, and rights.

Moderator – Prateek:
Thanks. Professor Shingley.

Xing Li:
Critical thinking and global collaboration. And global? Collaboration. Collaboration.

Moderator – Prateek:
Owen.

Owen Larter:
Three thoughts, I guess. One, sort of get concrete on capacity building and what we can be doing to drive things forward. Two, invest in evaluations, invest in evaluations, invest in evaluations, it’s a major gap. And across all of this, continue to bring together technical audiences with non-technical people that understand the socio-technical challenges of these systems as well.

Moderator – Prateek:
Thanks. Jean-Francois.

Jean Francois ODJEBA BONBHEL:
I would say innovation, education, and accountability.

Moderator – Prateek:
Thank you so much to our panelists here for your insightful thoughts, to the participants both online and in person here. We invite you to look at the report, which was, as again, mentioned before, developed in a multi-stakeholder manner. It’s available on the website of the IGF under Policy Network on Artificial Intelligence. This work will continue and you are invited to join and expand this community going forward. Thank you so much and have a good day. Thank you. Thank you. Thank you. Thank you.

Audience

Speech speed

148 words per minute

Speech length

535 words

Speech time

217 secs

Jean Francois ODJEBA BONBHEL

Speech speed

149 words per minute

Speech length

783 words

Speech time

314 secs

Jose

Speech speed

172 words per minute

Speech length

1487 words

Speech time

518 secs

Maikki Sipinen

Speech speed

148 words per minute

Speech length

772 words

Speech time

313 secs

Moderator – Prateek

Speech speed

164 words per minute

Speech length

2907 words

Speech time

1061 secs

Nobuo Nishigata

Speech speed

167 words per minute

Speech length

1846 words

Speech time

664 secs

Owen Larter

Speech speed

211 words per minute

Speech length

1808 words

Speech time

514 secs

Sarayu Natarajan

Speech speed

177 words per minute

Speech length

1772 words

Speech time

602 secs

Shamira Ahmed

Speech speed

129 words per minute

Speech length

640 words

Speech time

299 secs

Xing Li

Speech speed

150 words per minute

Speech length

620 words

Speech time

248 secs

Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Yik Chan Chin

In thorough discussions concerning China’s data policy and the right to data access, correlated with Sustainable Development Goal 9 (Industry, Innovation, and Infrastructure) and Sustainable Development Goal 16 (Peace, Justice and Strong Institutions), China’s unique interpretation of data access has become a focal point. According to the analysis, the academic debate and national policy in China are primarily driven by an approach that interprets data as a type of property. This perspective divides rights associated with data into three fundamental components: access, processing, and exchange rights. It posits that these rights can be traded to generate value, as explicitly stated in the government’s policy documents.

However, this policy approach has sparked substantial critique for its disregard of other significant aspects of data access. Chinese policies predominately fail to recognise data’s inherent character as a public good. The academic sphere and governmental policy make scarce acknowledgement of this, undervaluing its potential contribution to societal advancement beyond merely commercial gains. Along these lines, the rights and benefits of individual citizens are often overlooked in favour of promoting enterprise-related interests.

The country’s data access policy is primarily designed to unlock potential commercial value, especially within enterprise data – an aspect contributing to the imbalance of power between individual users and corporations. Such power dynamics remain largely unaddressed in China’s data-related discussions and policy settings, potentially leading to a power imbalance detrimental to individuals.

Given these observations, the overall sentiment towards the Chinese data policy appears to be broadly negative. Acknowledging data’s essence as a public good and according importance to individual rights and power balances would be fundamental components for a more favourable policy formulation and discourse. The inclusion of these elements will ensure that the data policy reflects the principles of SDG 9 and SDG 16, aiming for a balance between enterprise development and individual rights.

Vagisha Srivastava

Web Public Key Infrastructure (WebPKI), an integral component of internet security, provides several benefits such as document digital signing, signature verification, and document encryption. This is epitomised by an incident involving a company named DigiNotar, which, through the misissuing of 500 certificates, compromised internet security – underlining the significance of digital certificates in web client authentication.

WebPKI governance intriguingly falls within the public goods paradigm. While the government traditionally delivers public goods and the commercial market handles private goods, in the case of WebPKI, private entities take noticeable strides in contributing; this defies conventional dynamics in the production of both public and private goods. That said, the government’s involvement isn’t entirely dispelled, with the US Federal PKI and Asian national Certification Authorities (CAs) actively partaking.

The claim that private entities are spearheading WebPKI security governance presents certain concerns. Governments may find themselves somewhat hamstrung when attempting to represent global public interest or generate global public goods in this complex context. As a result, platforms which are directly affected by an insecure web environment (such as browsers and operating systems) secure vital roles in security governance.

The Certificate Authority and Browser Forum, established in 2005, is crucial in coordinating WebPKI-related policies. This forum serves as a hub where root stores coordinate policies and garner feedback from CAs directly. In fact, its influence is such that it sets baseline requirements for CAs on issues like identity vetting and certificate content, since its inception.

Regarding the internal functionings of such organisations, the voting process within the consensus mechanism is diligently arranged prior to the actual voting process. Any formal language proposed for voting is already agreed upon, and the consensus mechanism is established pre-voting. Notably, there is curiosity surrounding how browsers, an integral part of the internet infrastructure, respond to such voting processes.

To conclude, internet security and governance system operate within a complex realm driven by both private and public actors. Entities like WebPKI and the Certificate Authority and Browser Forum play pivotal roles. The power dynamics and responsibilities between these players influence the continued evolution of policies related to internet security.

Kamesh Shekar

The in-depth analysis underscores the urgent necessity for a comprehensive, 360-degree, full-circle approach to the artificial intelligence (AI) lifecycle. This involves a principle-based ecosystem approach, ensuring nothing in the process is overlooked and emphasising a need for coverage that is as unbiased and complete as possible. The subsequent engagement of various stakeholders at each stage of the AI lifecycle, from inception and development through to the end-user application, is seen as pivotal in driving and maintaining the integrity of AI innovation.

The principles upon which this ecosystem approach is formed have been derived from a range of globally respected frameworks. These include guidelines from the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the European Union (EU), and notably, India’s G20 declaration. Taking these well-established and widely accepted frameworks on board strengthens the argument for thorough mapping principles for varied stakeholders in the AI arena.

The analysis also delves into the friction that can occur around the interpretation and application of said principles. Distinct differences are highlighted, for instance, in the context of AI facets such as the ‘human in the loop’, illustrating the different approaches stakeholders adopt at various lifecycle stages. This underscores the importance of operationalisation of principles at every step of the AI lifecycle, necessitating a concrete approach to implementation.

A key observation in the analysis is the central role the government plays in overseeing the implementation of the proposed framework. Whether examining domestic scenarios or international contexts, the study heavily emphasises the power and influence legislative bodies hold in implementing the suggested framework. This extends to recommending an international cooperation approach and recognising the potentially pivotal role India could play amidst the Global Partnership on AI (GPA).

The responsibility of utilising these systems responsibly does not rest solely with the developers of AI technologies. The end-users and impacted populations are also encouraged to take on the mantle of responsible users, a sentiment heavily emphasised in the paper. In this thread, the principles and operationalisation for responsible use are elucidated, urging a thoughtful and ethical application of AI technologies.

An essential observation in the analysis is the lifecycle referred to, which has been derived from and informed by both the National Institute of Standards and Technology (NIST) and OECD, with a handful of additional aspects added and validated within the paper. This perspective recognises and incorporates substantial work already performed in the domain whilst adding fresh insights and nuances.

As a concluding thought, the analysis recognises the depth and breadth of the topics covered, calling for further in-depth discussions. This highlights an open stance towards continuous dialogue and the potential for further exploration and debate, possible in more detailed, offline conversations. As such, this comprehensive and thorough analysis offers a wealth of insights and provides excellent food for thought for any stakeholder in the AI ecosystem.

Kazim Rizvi

The Dialogue, a reputed tech policy think-tank, has authored a comprehensive paper on the subject of responsible Artificial Intelligence (AI) in India. The researchers vehemently advocate for the need to integrate specific principles beyond the deployment stages, encompassing all facets of AI. These principles, they assert, should be embedded within the design and development processes, especially during the data collection and processing stages. Furthermore, they argue for the inclusion of these principles in both the deployment and usage stages of AI by all stakeholders and consumers.

In their study, the researchers acknowledge both the benefits and challenges brought about by AI. Notably, they commend the myriad ways AI has enhanced daily life and professional tasks. Simultaneously, they draw attention to the intrinsic issues linked with AI, specifically around data collection, data authenticity, and potential risks tied to the design and usage of AI technology.

They dispute the notion of stringent regulation of AI at the onset. Instead, the researchers propose a joint venture, where civil society, industry, and academia embark on a journey to understand the nuances of deploying AI responsibly. This approach would lead to the identification of challenges and the creation of potential solutions appropriate for an array of bodies, including governments, scholars, development organisations, multilateral organisations, and tech companies.

The researchers acknowledge the potential risks that accompany the constant evolution of AI. While they recall that AI has been in existence for several decades, the study emphasises that emerging technologies always have accompanying risks. As the usage of AI expands, the researchers recommend a cautious, steady monitoring of potential harms.

The researchers also advise a global outlook for understanding AI regulation. They posit that a general sense of regulation already exists internationally. What’s more, they suggest that as AI continues to grow and evolve, its regulatory framework must do the same.

In conclusion, the research advocates for a multi-pronged approach that recognises both the assets and potential dangers of AI, whilst promoting ongoing research and the development of regulations as AI technology progresses. The researchers present a balanced and forward-thinking strategy that could create a framework for AI that is responsible, safe, and of maximum benefit to all users.

Nanette Levinson

The analysis unearths the growing uncertainty and expected institutional alterations taking centre stage within the sphere of cyber governance. This is based on several significant indicators of institutional change that have come to the fore. Indicators include the noticeable absence of a concrete analogy or inconsistent isomorphic poles, a shift in legitimacy attributed to an idea, and the emergence of fresh organisational arrangements – these signify the dynamic structures and attitudes within the sector.

In a pioneering cross-disciplinary approach, the analysis has linked these indicators of institutional change to an environment of heightened uncertainty and turbulence, as evidenced from the longitudinal study of the Open-Ended Working Group.

An unprecedented shift within the United Nations’ cybersecurity narrative was also discerned. An ‘idea galaxy’ encapsulating concepts such as human rights, gender, sustainable development, non-state actors, and capacity building was prevalent in the discourse from 2019 through to 2021. However, an oppositional idea galaxy unveiled by Russia, China, Belarus, and a handful of other nations during the Open-Ended Working Group’s final substantive session in 2022, highlighted their commitment towards novel cybersecurity norms. The emergence of these opposing ideals gave rise to duelling ‘idea galaxies’, signalling a divergence in shared ideologies.

This conflict between the two ‘idea galaxies’ was managed within the Open-Ended Working Group via ‘footnote diplomacy.’ Herein, the Chair acknowledged both clusters in separate footnotes, paving the way for future exploration and dialogue, whilst adequately managing the current conflict.

Of significant note is how these shifts, underpinned by tumultuous events like the war in Ukraine, are catalysing potential institutional changes in cyber governance. These challenging times, underscored by clashing ideologies and external conflict, seem to herald the potential cessation of long-standing trajectories of internet governance involving non-state actors.

In conclusion, there is growing uncertainty surrounding the future of multi-stakeholder internet governance due to the ongoing conflict within these duelling idea galaxies. The intricate and comprehensive analysis paints a picture of the interconnectivity between global events, institutional changes, and evolving ideologies in shaping the future course of cyber governance. These indicate a potential turning point in the journey of cyber governance.

Audience

This discussion scrutinises the purpose and necessity of government-led mega constellations in the sphere of satellite communication. The principal argument displayed scepticism towards governments’ reasoning for setting up these constellations, with a primary focus on their significant role in internet fragmentation. Intriguingly, some governments have proposed limitations on the distribution of signals from non-domestic satellites within their territories. However, the motives behind this proposal were scrutinised, specifically questioning why a nation would require its own mega constellation if their field of interest and service was confined to their own territories.

Furthermore, the discourse touched on the subject of ethical implications within the domain of artificial intelligence (AI). It highlighted an often-overlooked aspect in the responsible use of AI—the end users. While developers and deployers frequently dominate this dialogue, the subtle yet pivotal role of end-users was underplayed. This is especially significant considering that generative AI is often steered by these very end-users.

Another facet of the AI argument was the lack of clarity and precision in articulating arguments. Participants underscored the use of ambiguous terminologies like ‘real-life harms’, ‘real-life decisions’, and ‘AI solutions’. The criticism delved into the intricacies of the AI lifecycle model, emphasising an unclear derivation and an inconsistent focus on AI deployers rather than a comprehensive approach including end-users. The model was deemed deficient in its considerations of the impacts on end-users in situations such as exclusion and false predictions.

However, the discussion was not solely encompassed by scepticism. An audience member provided a positive outlook, suggesting stringent regulations on emerging technologies like AI might stifle innovation and progress. Offering a historical analogy, they equated such regulations to those imposed on the printing press in 1452.

Throughout the discourse, themes consistently aligned with Sustainable Development Goal 9, thus underscoring the significance of industry, innovation, and infrastructure in our societies. This dialogue serves as a reflective examination, not just of these topics, but also of how they intertwine and impact one another. It accentuates the importance of addressing novel challenges and ethical considerations engendered by technological advances in satellite communication and AI.

Jamie Stewart

The rapid advancement of digital technologies and internet connectivity in Southeast Asia is driving the development of assorted regulatory instruments within the region, underwritten by extensive investment in surveillance capacities. This rapid expansion, however, is provoking ever-growing concerns over potential misuse against human rights defenders, stirring up a negative sentiment.

Emerging from the Office of the United Nations High Commissioner for Human Rights (OHCHR) is a report on cybersecurity in Southeast Asia, bringing attention to the potential usage of these legal legislations against human rights defenders. Concerns are heightening around the wider consensus striving to combat cybercrime. The general assembly has expressed particular apprehension leaning towards misuse, especially of provisions that relate to surveillance, search, and seizure.

What emerges starkly from the research is a disproportionate impact of cyber threats and online harassment on women. The power dynamics in cyberspace perpetuate those offline, leading to a targeted attack on female human rights defenders. This gender imbalance along with the augmented threat to cybersecurity raises concerns, aligning with Sustainable Development Goals (SDG) 5 (Gender Equality) and SDG 16 (Peace, Justice, and Strong Institutions).

The promotion of human-centric cybersecurity with a gendered perspective charters a course of positive sentiment. The protective drive is for people and human rights to be the core elements of cybersecurity. Recognition is thus given to the need for a gendered analysis, with research bolstered by collaborations with the UN Women Regional Data Centre in the Asia Pacific.

An in-depth exploration of this matter further uncovers a widespread range of threats, both on a personal and organisational level. This elucidates the sentiment that a human-centric approach to cybersecurity is indispensable. Both state and non-state actors are found to be contributing to these threats, often in a coordinated manner, with surveillance software-related incidents being particularly traceable.

Additionally, the misuse of regulations and laws against human rights defenders and journalists is an escalating worry, prompting agreement that such misuse is indeed occurring. This concern is extended to anti-terrorism and cybercrime laws, which could potentially be manipulated against those speaking out, potentially curbing freedom of speech.

On the issue of cybersecurity policies, while their existence is acknowledged, concerns about their application are raised. Questions emerge as to whether these policies are being used in a manner protective of human rights, indicating a substantial negative sentiment towards the current state of cybersecurity. In conclusion, although the progression of digital technologies has brought widespread benefits, they also demand a rigorous protection of human rights within the digital sphere, with a marked emphasis on challenging gender inequalities.

Moderator

Throughout the established GigaNet Academic Symposium, held at the Internet Governance Forums (IGFs) since 2006, a multitude of complex topics takes centre stage. This latest iteration featured four insightful presentations tackling diverse subjects ranging from digital rights and trust in the internet, to challenges caused by internet fragmentation and environmental impacts. The discourse centered predominantly on Sustainable Development Goals (SDGs) 4 (Quality Education) and 9 (Industry, Innovation, and Infrastructure).

In maintaining high academic standards, the Symposium employs a stringent selection process for the numerous abstracts submitted. This cycle saw roughly 59 to 60 submissions, of which only a limited few were selected. While this guarantees quality control, it simultaneously restrains the number of presentations and hampers diversity.

Key to this Symposium was the debate on China’s access to data, specifically, the transformative influence the internet and social media platforms have exerted on the data economy. This has subsequently precipitated governance challenges primarily revolving around the role digital social media platforms play in managing data access and distribution. The proposed model for public data in China involves conditional fee access, with data analyses disseminated instead of the original datasets.

One recurring theme in these discussions related to the state-led debate in China that posits data as marketable property rights. Stemming from government policies and the broader economic development agenda, this perspective on data has dramatically influenced Chinese academia. However, this focus has led to a significant imbalance in the data rights dialogue, with the rights of data enterprises frequently superseding those of individuals.

Environmental facets of ICT standards also commanded attention, underscoring the political and environmental rights encompassed within these standards. Moreover, the complexity of measuring the environmental impact of ICTs, which includes carbon footprint and energy consumption through to disposal, confirms the necessity of addressing the materiality of ICTs. The discussion further emphasised that governance queries relating to certificate authorities are crucial to understanding the security and sustainability of low-Earth orbit satellites, given the emergence of conflicts and connections between these areas.

Concluding the Symposium was an appreciative acknowledgement of the participants’ contributions, from submitting and reviewing abstracts to adjusting sleep schedules to participate. Transitioning to a second panel without a break, the Symposium shifted its focus towards cyber threats against women, responsible AI, and broader global internet governance. Suggestions for improvements in future sessions included clarifying and defining theoretical concepts more comprehensively, focusing empirical scopes more effectively, and emphasising the significance of consumers and end-users in cybersecurity and AI discourse. The Symposium, thus, offered a well-rounded exploration of multifaceted topics contributing to a deeper understanding of internet governance.

Berna Akcali Gur

Mega-satellite constellations are revising global power structures, signalling significant strategic transitions. Many powerful nations regard these endeavours, such as the proposed launch of 42,000 satellites by Starlink, 13,000 by Guowang, and 648 by OneWeb, as opportunities to solidify their space presence and exert additional control over essential global Internet infrastructure. These are deemed high-stakes strategic investments, indicating a new frontier in the satellite industry.

Furthermore, the rise of these mega constellations is met with substantial enthusiasm due to their impressive potential in bridging the existing gaps in the global digital divide. Through the superior broadband connectivity, vital for social, economic, and governmental functions, offered by these satellite constellations, along with their low latency and high bandwidth capabilities, fruitful benefits, such as optimising IoT, video conferencing, and video games, can be harvested.

However, concerns have been raised over the sustainable usage of the increasingly congested orbital space. Resources in space are finite, and the present traffic could result in threats such as collision cascading. Such a scenario could make orbits unusable, depriving future generations of the opportunity to utilise this vital space.

European Union’s stance on space policy, particularly the necessity of owning a mega constellation, demonstrates some contradictions. While a EU document maintains that owning a mega constellation isn’t essential for access, it is thought crucial from strategic and security perspectives, revealing a potentially contradictory standpoint within the Union.

Another issue is fragmentation in policy implementation due to diversification in government opinions, as demonstrated by the decoupling of 5G infrastructure where groups of nations have decided against utilising each other’s technology due to cybersecurity issues. With the rise in the concept of cyber sovereignty, governments are increasingly regarding mega constellations as sovereign infrastructure vital for their cybersecurity.

Lastly, data governance is a significant concern for countries intending to utilise mega constellations. These countries may require that constellations maintain ground stations within their territories, thereby exercising control over cross-border data transfers, a key aspect in the digital era.

In conclusion, the growth of mega-satellite constellations presents a complex issue, encompassing facets of international politics, digital equity, environmental sustainability, policy diversification, cyber sovereignty, and data governance. As countries continue to navigate these evolving landscapes, conscious regulation and implementation strategies will be integral in harnessing the potentials of this technology.

Kimberley Anastasio

The intersection between Information Communication Technologies (ICTs) and the environment is a pivotal issue that has been brought into focus by major global institutions. For the first time, the Internet Governance Forum highlighted this interconnectedness by setting the environment as a main thematic track in 2020. This decision evidences increasing international acknowledgment of the symbiosis between these two areas. This harmonisation aligns with two key Sustainable Development Goals (SDGs): SDG 9, Industry, Innovation and Infrastructure; and SDG 13, Climate Action, signifying a global endeavour to foster innovative solutions whilst advocating sustainable practices.

In pursuit of a more sustainable digital arena, organisations worldwide are directing efforts towards developing ‘greener’ internet protocols. Within this landscape, the deep-rooted role of technology in the communication field has driven an elevated demand for advanced and sustainable communication systems. This paints a picture of a powerful transition towards creating harmony between digital innovation and environmental stewardship.

Within ICTs, standardisation is another topic with international resonance. This critical process promotes uniformity across the sector, regulates behaviours, and ensures interoperability. Together, these benefits contribute to the formation of a more sustainable economic ecosystem. The International Telecommunications Union, a renowned authority within the industry, has upheld these eco-friendly values with over 140 standards pertaining to environmental protection. Concurrently, ongoing environmental debates by the Internet Engineering Task Force suggest a broader trend towards heightened environmental consciousness within the ICT sector.

The materiality and quantification of ICTs are identified as crucial facets to environmental sustainability. Measuring the environmental impact of ICTs, although challenging, is highlighted as vital. This attention underlines the physical presence of ICTs within the environment and their consequential impact. This primary focus realigns with the targets of the aforementioned SDGs 9 and 13, further emphasising the significance of ICTs within the global sustainability equation.

In parallel with these developments, a dedicated research project is being carried out on standardisation from an ICT perspective, involving comprehensive content analysis of almost 200 standards from International Telecommunications Union and Internet Engineering Task Force members. This innovative methodology helps position the study within the wider spectrum of standardisation studies, overcoming the confines of ICT-specific research and implying broader applications for standardisation.

Alongside this larger project, a smaller but related initiative is underway. Its objective is to understand the workings of these organisations within the extensive potential of the ICT standardisation sector. The ultimate goal is to develop a focused action framework derived from existing literature and real-world experiences, underlining an active approach to problem solving.

Collectively, these discussions and initiatives portray a comprehensive and positive path globally to achieve harmony between ICT and sustainability. Whilst there are inherent challenges to overcome in this journey, the combination of focused research, standardisation, and collaborative effort provides a potent recipe for success in the pursuit of sustainable innovation.

Session transcript

Moderator:
Yes, good. Okay, thanks a lot. Good morning, afternoon, evening, good night to many of the people here. Thank you very much for coming. This is the GigaNet Academic Symposium, which as tradition has been going since 2008 at the IGFs. 2006? Oh, the first one I did was 2008. Sorry, Milton. Thanks for the memory. Since 2006 at the IGFs, we’re very grateful to the IGF Secretariat for facilitating this meeting and getting us into this jam-packed program for this year with lots of exciting panels going on. We have a very exciting conference for you today as well, and happy to see so many faces in the room and quite a few people online as well. Just a bit of a summary of the background for this year’s symposium. We were set up, we were informed of the date for the IGF, and then went through our rigorous academic procedure for selecting abstracts that emerged as a result of a call. We had 59 or 60 abstracts submitted to the workshop, and we were able to accept only a small number of these. Thank you very much to everybody who participated in this whole process of submitting an abstract. Thank you very much to the members of the program committee who actually spent time reviewing the abstracts. It’s not easy to do this, so thank you very much. much to you as well. And thank you to the presenters for actually making their way here or staying up very late or getting up very early during today. Since we’re running a bit late, I’ll cut my presentation there. But I’m also, sorry, I don’t know, somebody else, did somebody else want to say something? No. I’m also acting as the chair and discussant for the first panel, which is taking place right now. We have, and I’ve made a list, we have four presenters, two of whom are here in the room, and two of whom are on site. We will start with a paper by Yik Chan-Chin from Beijing Normal University. Yik Chan-Chin is also a member of the steering committee for the Giganet Association. So Yik Chan-Chin will be talking about the right to access in the digital era. Then afterwards we have a paper that will be presented by Vagisha Srivastava on WebPKI and the private governance of trust on the internet. Vagisha is from Georgia Tech. And then the third paper will be on internet fragmentation and its environmental impact, a case study of satellite broadband, which will be presented by Berna Akaligur, who is from Queen Mary University in London, and she’s sitting on my immediate left. And then the last paper presented in this panel will be on ICT standards and the environment, a call for action for environmental care inside internet governance, which will be presented by Kimberly Anastasio, who is at American University. in the US and online. OK, without further ado, I will pass the floor to Yik Chan. You now have your 10 minutes to describe your paper. And we’ll move into the next paper immediately after that. OK? Thank you very much.

Yik Chan Chin:
OK, thank you very much, Jim. Can you hear me? Can you hear me now? Hello? Yeah, OK, thank you. Yeah, this is Yik Chan from Beijing Normal University. But actually, I’m in London. So this is a 2 AM morning in the early morning of London time. OK? So my presentation actually is about right to data access in the digital era, the case of China. And first of all, I would like to contextualize this debate of data access in the Chinese context. So first of all, we talk about why the debate of the access, collection, and dissemination of data become the center of the academic debates and the policymaking in China. Because there are three factors contribute to this discussion. First of all, is the internet and the data perceived as the important driving force for economical development in China. Secondly, is the rapid development of platform economy. And also, the mass production of data has raised the governance problem in the storage, transmission, and the use of data in China. Certainly, it’s because the role of a digital social media platform in data access and dissemination has strengthened the public demand of governments to act on the protection of right to information in China. So for those reasons, the academic debate and the national policy of access to data become where we hit it, the center of the policymaking in recent years. years. And also we found that the conceptualization of right to access to data in China and the formal informal rules, which is related to the legitimacy of the right to the public epistemic right to data is quite interesting. So that’s why I focus my study on the relation between the digital access, the right to access to digital data and the epistemic right. And so the data I use in this paper, including the national government’s policy regulation, and also secondary data. And so what is the epistemic right? So this is a right actually closely linked to the creation and the dissemination of law. It’s not only about the informed, but also about being informed truthfully and understanding the relevant of information and acting on it is based for the benefit of themselves and the society as a whole. So this is a concept, a start with, and also the epistemic of right is emphasized under equality, such as equality to access and availability of information, the knowledge equality in terms of obtain critical literacy in information communication. So, and also we need to understand the concept between data, information, and knowledge that interrelated concepts. So data is a set of symbols and kind of the representation of the Royal factors, but the information is organized the data and the law is understood, understood information. So these three concepts are interrelated. So therefore data is the form of law is create social process. So it’s a kind of a social constructed. So therefore it’s interesting. to see how the different social agents participate in the construction of the access right to data as a part of the equation of the social knowledge. So in my paper, I define different type of data such as data packet and price commercial test. And so I’m not going to details because of time limitation. So I define the right to access to data including two element. The first element is a right to access to public information, which is recognized as individual human rights by many jurisdictions and human rights body. The secondary is inclusive right for all member of society to benefit from the availability of data. So this is a mighty definition of access to data in my paper. So at a global level, there’s a different debate about a right to access to data. For example, we got academia recognize the data as a, it’s not a public good, but a leverage resource. But we also get other academic like Victor from Schumpeter from the OII in Oxford, which defined the data as a non-rivalry information and a public good. So it’s open to open access. And we also have like a European commission and the World Economic Forum. They provide different strategy to how to access to data including data access for all strategy or like what the economical platform they want to create a data marketplace service provider. So therefore the right to access to data can be traded in an open, efficient and accountable way. So therefore it’s tradable, data is tradable and they can be managed by a platform. provider, where European Commission’s approach is more like data access for all strategy, and the business to governments data access have a different requirement, and also creation of common European data space for important areas as well. So, look at the Chinese debate. The Chinese debate is interesting because they never treated right to access to data as an independent right, but as a part of the right to information, and also treated as a data as a property right in China. This data is not treated as a kind of right to information, or no, it’s not treated as an individual right, but it’s treated as a right to information or as a property debate in China. So, in terms of the public information, if the data is owned by government, so there’s a different approach. One is that this is the public data, which is public good, it’s owners, it belongs to all people. So, the second approach is the data, public data should belong to the state, and the non-public data, like a personal data, should be subject to personal protections. But there’s no debate about what is the right to access to the personal data, it’s not explicitly discussed, and also the equality nature of the epistemic right, such as equality to access and availability of information and knowledge has not drawn much Chinese debate attention as well. So, data access right, therefore, in China is treated as a property discussions. So, they want to formulate a trading system, so that the data can be traded to generate value, and so that is their approach. And this kind of definition… information actually is also triggered by the government policy project and the utilization of big data. So therefore Chinese academic debate are heavily policy driving in this sense, because this is the debate and that the position of the Chinese academia is heavily triggered by the government’s policy and there is policy driving as well. So very few of the academic debate actually support public good nature of data and the support of data sharing should be the default position and the control of access to data require justification because data is a natural public good. So therefore we can see the debate is pretty different from the other side in the global law. So there is the policy, the Chinese government’s policy regarding to the data, how the access of the data. So from 2015 to 2020, there’s a different action plans and the big data development plans. And also the most important plans is opinions on building a better market allocation system and the mechanism for factors of productions. And also this building a data space system for the better use of data as a factor of production. So basically the policies defined data property right consists of three rights. And so they treat data as a property and the data has a property right. But this property right consists of three rights. So like one is the access right, process right and the exchange right. So the property right is divided into three rights in the Chinese context. So, and in here we want to look at the definition of the how. How do they provide the right to different data? For example, the public data, this is the data generated by the party and the government agents, enterprise and institution in performing their duties or in providing public service. So the access of policies strengthen data aggregation and sharing. So you can access to this public data, but you need to authorize. And also there’s a conditional fee access, but also for particular data, you have to pay for it, okay? And so, but the public data is not to be accessed directly. They must be provided in the formal of models, product or service, but not in original data site. You cannot access to the data set, but you can access to the model or product or service, you know, generated based on the public data. So the second is personal data. The personal data is about personal informations. And so they have process, a process they can collect, hold, host and use of data with the authorization, but those personal data has to be normalized. So to ensure the information security and the personal privacies. So protection of the data, right to data subject to a copy and the transfer of the data generated by them. So you have a right to access this personal data, but you can only obtain a copy or transfer the data generated by the platform to other platforms. So this is the right offered to access to personal data. But for the enterprise data, data collected in the process by the market in production and business activities, they do not involve personal information or public interest. So they recognize and protect. the enterprise right to process and use of data and protect the right of data collector to use data and obtain the benefit, protect the right to use data or process data in commercial operations. And they also regulate authorize of the data collector for third party. And the original data is not shared or released, but access to data to analyze as shared. So government agents can also obtain enterprise and issue data in a coordinate of law and regulation in order to perform their duties. So this is the right to access to enterprise data. So the conclusion is that, first of all, access to data is not a defined aspect of the epistemic right. But in the Chinese context, they have a different but also similar interpretations. So because of the epistemic right in the Western academic literature, they are more from the sociological nature of the creation and dissemination of information knowledge. So the right are underpinned by the normative criteria of equal access, the availability of information and knowledge and the use for the benefit of the individual and the society as whole. So data is treated as a form of knowledge. It’s a non-verbal information good, a public good for the benefit of all. So therefore, open access and sharing of non-confidential data is proposed. So in the Chinese context, epistemic right has not yet drawn any attentions of Chinese academic debate. So the close related concept to the epistemic right is the right to information. But this right to information is approached from a legal perspective rather than from the sociological perspective. So they are stressing on the commercial right and also the public right to information right to the public data. So data is defined as one kind of factor of protection for the national economic development. So this is very tricky, because in the Chinese context, data is defined as a factor of production. You know, there’s a four factor of production, like a labor, lands, you know, and the capitals. But the data is defined as a fifth element of a factor of productions, which is very unique. So, and so therefore, the data is, has a non-variable and the non-sort. So they recognize data is a non-variable, and so it’s a character, you know, but they do not, data cannot be circulated in the market and like a lands, labor, and the capital. But the public good nature of the data has not been recognized in the mainstream academic publication of the government’s policy. So before, because of this, the public good or eco-assets dissemination of data are not mentioned in the public policy making. So under this kind of permiss, so therefore, data collection, analyze, and the process are aimed at unlocking the potential commercial value of data, especially for enterprise data, and define the various kinds of the data are focused on what then we go debate and the policy contextualizations. So therefore, the data, you know, in the data debate, in the Chinese policy and the academic debate, the focus is the right and the interest of data enterprise and not the individual right. The power imbalance between the individual and the corporation and the sharing of the benefit derived from data with the individual user and the data subjects has not been addressed. I think that’s all my presentation, the argument of my paper, thank you very much.

Moderator:
Thank you, thank you. And very close to time, so. we’re starting off well. We’ll move. Thank you very much, Yik Chan. I hope you can stick Okay. Okay. Okay.

Vagisha Srivastava:
Hello, everybody. Good morning. I’m going to talk about Web Public Key Infrastructure and the Private Governance of Trust on the Internet. I know it’s a cool title. All credit goes to Dr. Milton Mueller here in the audience, Dr. Carl Grindle, who couldn’t join us in person but is likely joining us virtually. Hi, Carl. And me, the lovely Ph.D. student you would want to have around. And of course we would like to appreciate the generous grant from the ISOC Foundation for this research. I am going to tell you the story of Internet security. And like all good stories, this one, too, starts with a tragedy. There was a security breach. A company called DigiNotar misissued 500 certificates on the Web. It was later identified as a man in the middle attack by Iranian hackers. But the company took no action for two months into the breach until the Dutch government intervened. I was only found out because one of the misissued certificates was for Google. So what is DigiNotar, what’s a certificate, and why is the story a significant plot point to what I’m talking about today? Okay. So we are all familiar with this. When a user types a Web address on the browser, the little lock sign tells us that the connection is secure. HTTPS that we have heard about enables a secure channel for communication. But the browser still needs to identify if the server or the client is in fact who they are claiming out to be. And it is done using digital certificates. Digital certificates authenticate clients on the Web. Certificate authorities or CAs issue certificates to website operators upon request. So issuing certificates, they’re supposed to verify the identity of the entity making the request. And the certificates then act as a recorded attestation. that the holder is, in fact, who they’re claiming to be. WebPKI is a web-based component that supports documents digital signing, signature verification, and document encryption, such as the certificates using the public key cryptography or asymmetric cryptography. Now, before we get into the details of that, let me first lay out what we are trying to do in this paper. There’s a bunch of literature out there from the technical community and the workings of WebPKI and the security or lack thereof, provided by certificates and certificate authorities. We’re looking at it from the governance perspective. We’re questioning the commonly held notions of public good and its delivery mechanisms. Public good is often provided by the government, private good provided by the market. And we are situating the governance of WebPKI specifically within the framework of public good being provided by private actors. We argue that the production of public good and some non-public goods require collective action, but not necessarily state action. Governments are but one vehicle of providing these goods, not the only one, and definitely not the most efficient one out there. The paper offers an innovative perspective on the dynamics of public production of private goods in the context of internet security. OK. One more slide. We use the framework of institutional analysis. We identify the public good in question. Then we identify the stakeholders, talk about if they cooperated or compete to achieve the said public good. Did they overcome the known barriers to collective action? We then describe the rules within which these stakeholders group institutionalized, and finally use some data collected to assess the efficacy of the institutions in achieving the desired result. That is the enhanced security. OK. Shifting gears again. If there are only two parts. parties communicating over the web. As long as these two parties can authenticate each other, the adoption and the use of encryption on the public web does not require any special form of institutionalized collective action. The hard part is the authentication process when there are multiple servers and multiple clients required in the process. It requires a reliable and trustworthy mapping of the private key holder to the public key. In the WebPKI ecosystem, digital certificates facilitate this mapping. When a server presents its digital certificate, which includes a public key, during a secure connection setup, the client can verify a certificate’s authenticity and trustworthiness. Split key cryptography eliminates the need to transmit private keys over insecure networks. However, it also creates an impersonation problem and certificates solve it for the web. A mismatch between the two, that is the private key and the public key, enables a man-in-the-middle attack. That is something that we saw with the DigiNotar incident. But this brings us to the question, how do we trust a CA to not be a bad actor or, worse yet, a compromised actor? There’s a chain of trust that enables us to trust a subsequent CA, where each subsequent CA or the intermediary CA has to comply with a certain set of policies set by the browsers. The endpoint, a root CA, is maintained in a root store by browsers and operating systems and has to go through a complex vetting process to be included in the root store program. Now, we have established the authentication is public good. Let’s spend a minute understanding why collective action is required. The web ecosystem as a whole needs effective authentication across the board. Security is not a private good, because a compromised certificate or a certificate authority has the potential to affect any website or any users across the system. alone have the incentives to provide it themselves and can be motivated by several factors. Browsers and operating systems cannot be responsible for screening every single website on the web. The digital ecosystem depends heavily on the trust to work and so needs authentication mechanism to be applicable everywhere and does need collective action to be enforced. Who are the stakeholders in the ecosystem? We identify four. Security risks are most concentrated on the top at root stores in the browsers and they have diminishing systemic effects as you go down. There are hundreds of certificate authorities, millions of subscribers who get the certificate and billions of end users and individual devices who rely on these certificates for authentication. According to Mankar Olson, collective action is costly. The coordination and communication costs and the bigger the group, the more the costs rise. The institutional solutions to the collective action problem for WebPKI focus on the top of the hierarchy, that is the browsers and the CAs, and it does not try to directly involve subscribers. The root stores act as a proxy for the end users and the certificate authorities act as a proxy for subscribers. We identify three institutional vehicles, the main character of our stories here. The certificate authority and the browser forum, which we’ll be talking about in a minute in more detail. The certificate transparency, which is an Internet security standard for monitoring and auditing the issuance of digital certificates through decentralized logging and ACME, or the automatic certificate management environment, which is a communication protocol for automating interactions between the CAs and their user servers. Okay. So the certificate authority and the browser forum. Remember the DigiNode slide? Well, I might not have been completely honest when I said that the story started with that tragedy. It started a little bit before that. Narrative privileges. From 1995 to 2005, certificates were being issued with virtually no standardized governing rules in place. The Certificate Authority and the Browser Forum was founded in 2005. But in 2012 was when the forum started actively making rules for the system. Since 2012, the forum has produced a set of baseline requirements for CAs that tackled convergence of expectations between the browsers and the CAs on issues such as identity vetting, certificate content and profiles, certificate revocation mechanism, algorithms and key sizes, audit requirements, and delegation of authority. The baseline requirements have been revised about every six months by means of formal ballots approving amended text. The second part of our methodology involves studying the CA Browser Forum in particular. We first collected data from the forum meetings. This included attendance records, meeting minutes, and we also had 10 semi-structured interviews. To capture the market share, we used random sampling to sub-sample around 2 million domains data from the Common Crawls database. The CAP Forum was described by one of our interviewees as a place for the root stores to coordinate their policies so that they don’t create conflicting policies and to get feedback from the CAs directly. While we do see on the chart that there are more European members than US member organizations within the forum, US participants are more active when it comes to participation. We tracked the activity of different stakeholders in the forum, and you can see an increase in participation following 2017, which was because of an addition of a new working group. Between 2013 and today, the browsers have become. And we also note the active participation of the U.S. federal PKI. There are many economic conflicts of interest between the browsers and the CAs, but we can see from our analysis of the voting records that in 92% of the ballots, a majority of both stakeholder groups supported or opposed a proposal. In only 2% of the cases did the browsers favor a proposal that was opposed by a CA. We see this data was collected in February 2023. We see that Let’s Encrypt, which is a civil society effort to encourage the use of encryption and to automate the issuance of certificates, is dominating the market. This didn’t used to be the case before, but it clearly showcases how automation has led to increased adoption of DV certificates. If externalities caused by poor CA practices are the main drivers of collective action, we should expect to see the gradual homogenization of the root stores across browsers produced over time. We do see substantial overlap in which root certificates and the browsers admit the root certificates into the trust store. We also see a gradual reduction in the number of trusted certificates within the root store over time. The measure of efficacy is interesting here. We see that the encryption and the web has increased significantly over time. We also observed that misissued certificates have decreased over time. However, while the global misissuance rate is low, this is predominantly due to the handful of large authorities that consistently issue certificates without error. The three largest CAs that we identified in the market share, Let’s Encrypt, Cloudflare, and cPanel, signed 80% of the certificates in the data set and have near zero misissuance. mis-issuance rates. Now, perhaps the most important finding, why have private actors taken the lead in security governance in this case? Well, governments are politically structured such that they cannot represent a global public interest or produce global public goods easily. The authority is fragmented, and there are numerous rivalries among them, especially when it comes to cybersecurity. The platforms have a greater alignment with the security interests of their users than national governments. An insecure web environment hurts their business interests, while governments are not directly harmed. Also, they often have a strong interest in undermining encryption and user security for surveillance purposes. The implementation of WebPKI involves an elaborate web of technical interdependencies. Security measures impose costs and benefit upon all four stakeholder groups. Those directly involved in the operation and implementation of WebPKI standards are in a better position to assess the cost and the risk of the trade-offs and make rational decisions. But the government is not entirely absent. We see USFPKI organizations participating actively. We have observed from the meeting minutes and the interviews that EU is pushing for the EIDAS regulation on this ecosystem. We also see involvement of national CAs, mostly from Asia, representing the interest in the forum. Now, why should you care? Well, a lot of times when insecurities happen in the web, the blame is often put on the users, because they engage in unsafe practices. You remember the always proceed option that is available for the certificate mismatch? This is not a perfect system. There are still compromises that can happen because of misaligned incentives, or just oversight because of redundancy of the process. In some cases, these could be intentional, example selling of backdated certificates. But it’s always better to know a little bit more. All of the good things about the internet relies on this ecosystem. For example, in- Ensuring cat photos are linked to a secure server, which they claim to be when you’re surfing the web. The topic is also understudied within the Internet governance community and hence would be of relevance to the scholars present in the room. Okay. So like all good stories, this doesn’t end here. We will possibly have a bunch of sequels. We are planning to do a study about how governments intervene in the system, get in more detail. We also would like to have the measure of effectiveness of certificate transparency that we mentioned in the institutional vehicle part and maybe check out if the impact of automation or the ACME on the system. That’s all. Thank you very much.

Moderator:
Thank you very much, Vagisha, you, again, you did the timing very well. Thanks a lot. We now move to Berna . I will just set up your slides on my computer and in the meantime, I give you the floor. Okay. Do you need your computer? Okay. Thanks. Thanks. Thanks. Thanks.

Berna Akcali Gur:
Thank you, Jamal. So this paper is one of the outcomes of our research project funded by the Isaac Foundation. global governance of Leo Satellite broadband. In that project, we focused on the jurisdictional challenges to the integration of mega-satellite constellations to the global Internet infrastructure. So the report resulting from that study can be found on the website that I’m sharing in our PowerPoint. There you can see the link for a separate ISOC project on Leo Satellite connectivity. The ISOC group assessed the subject from a purely policy perspective, and we joined our forces at times, and I recommend the report as well. Now as we were conducting that study, the satellite broadband industry picked up pace. More and more applications have been filed at the International Telecommunications Union for new mega-constellation projects. While there’s a certain degree of excitement about them, the scientists studying space and astronomy have raised their voices about the impact of these projects on space sustainability and space environment. So we decided to analyze the tension between the competing interests that are universal broadband connectivity for sustainable development and cyber sovereignty on one side, and sustainable use of space resources on the other. Of course, from a law and policy perspective, which inspired this paper. It’s still a draft, so we welcome any constructive feedback. So what is new about satellite connectivity? We all know that space technology satellites have long been a complementary part of the global communications infrastructures. Most often, they have provided last-mile solutions in remote and sparsely populated areas, such as islands or villages and mountains, because these areas are not easily served by terrestrial networks. And also, we shouldn’t forget, we still use them when we are in transportation, such as ships and planes. So communication… satellites are not new. The idea of multi-satellite systems, the constellations, are not new either. Earlier constellations in the low Earth orbit had emerged in the 90s. Orbcom, Iridium, Globstar are examples. These consisted of smaller number of satellites and they provided speech and narrow band data. They were not viable businesses for mass consumption. They were expensive projects and they couldn’t compete with the speed and capacity of the terrestrial networks, so they didn’t really receive much attention. Recently, advances in communications and separately space technologies, dramatic reductions in launch costs, financing by the technology sector, and most importantly, the ever-growing broadband demand, drove a second wave of satellite constellation ventures. These are very ambitious projects with increased number of satellites. Some leading examples are 42,000 satellites planned by the US venture Starlink, 13,000 satellites planned by the Chinese venture Guowang, and 648 for UK and India venture OneWeb. So newness in the sense is in the scale of these projects. So how do these ventures relate to sustainable development goals? As you all know, for most social, economic, and governmental functions, the use of applications enabled by the low latency, high bandwidth connectivity has become even more essential. Low latency is particularly important for the web-based applications that require high speed. Some applications that I can mention are Internet of Things, video conferencing, video games. The new constellations are able to match this requirement because the data travels much faster when the communication satellite is in use. in the low-Earth orbit simply because the distance is much shorter. So, the promise of broadband connectivity with minimal terrestrial infrastructure is almost miraculous from a connectivity as an enabler of SDGs perspective. That is why the emergence of these satellites have been met with enthusiasm in the context of their potential contribution to bridging the global digital divide and global development. But how does the system work? Are these satellites infringing on territorial sovereignty of countries by providing Internet from the skies? Well, we should first understand the technicality behind this to understand how the domestic regulations work. So, the ground stations, they act as a gateway to the Internet and our private networks and the cloud infrastructures. Currently, the distance between the ground stations is required to not exceed about a thousand kilometers. The second component is the user terminal by which the users connect their devices to receive broadband services. These are provided by the satellite company operating the system. Additionally, satellites need an assigned frequency spectrum, a limited natural resource, as the satellites communicate with the Earth through these radio waves. The user terminals will link to the satellite in closest proximity, which may be a different satellite in the constellation at a given time. That satellite will be connected to other satellites, one of which will have a connection to the ground station. Then, there is a cloud infrastructure. The satellite companies will use cloud infrastructure, which is a mutually beneficial relationship as the cloud infrastructures benefit from their connectivity as backup to their existing setup. As I said, the provision of satellite services is not limited to the cloud. within a particular country is subject to that country’s laws and regulations. These are called landing rights and the countries decide the terms of landing rights for themselves. For example, the ground station. For that, the companies will need authorization from each relevant jurisdiction. They will also need to obtain a license to use the frequency spectrum. If they provide their services directly to consumers, they will also likely need an internet service provider license. What is more, the importation of their user terminals will also be subject to import requirements of the national authorities. So the provision of satellite broadband service by a company is subject to a wide range of laws and regulations of the host country. For example, Russia and China have already declared that they will not allow the provision of satellite broadband by foreign service providers. The countries with space capabilities felt that it would be better if the existing domestic control mechanisms were complemented by ownership and control of their own mega constellations. These have been frequently referred to as sovereign structures. Competition is perceived to benefit markets and end-users. So at first glance, it seems like we have more of a good thing. With more choices for all, business models will mature and that should be celebrated. But when we look at the reasoning behind these investments, the governments emphasize their strategic value and the significance of sovereign alternative infrastructure for digital sovereignty and cybersecurity purposes. The financial viability of these ventures is still not certain, so there isn’t much emphasis on that. The digital sovereignty and cybersecurity concerns incentivize countries and regions to align their communication infrastructure and is controlled along their borders. I’m looking at Milton because he coined the term alignment as fragmentation. these ventures are also manifestations of the ongoing fragmentation. Okay, so the foreseeable harms of the new competition to space, particularly the orbital environment, are grave. From launch emissions to orbital debris, the current regulatory framework is simply not sufficient to tackle the problem in time. In time is the operative word here. Due to exponential increase in the number of space objects, space traffic has become more challenging to manage, and more collusions are anticipated. The space environment is becoming more prone to collusional cascading, which means that once a certain threshold is reached, the total volume of space debris will continue to grow. This is because collusions create additional debris, leading to more collusions, creating a cascading effect. Such a catastrophe may render not only the low-Earth orbit, but almost all space resources inaccessible for all, even for future generations. So because there was a competition among powerful nations to each have as many constellations as they could afford, the orbital environment may become unusable for any service, including connectivity and travel. So future generations may be locked in, trying to figure out how to clean up the orbits and restore them. Space resources are limited resources. Orbital space is already congested. Space traffic is difficult to manage, and there is a risk of collusional cascading. So is the promise of constellations, especially the multiplication of space-based internet infrastructure, worth the risks we impose on space environment? So internet governance scholars have known for a long time that advancements in most information communication technologies are perceived in terms of… in terms of their potential impact on global power structures. Megasatellite constellations are also deemed strategic investments, both in terms of space presence and in terms of influence and control over global Internet infrastructure. And I’ll just skip this part. So, the efforts have been deemed to have significant impact on the openness and unity of Internet. But now the same can have an impact on… Anti-fragmentation efforts are deemed to have a significant impact on the openness and unity of the Internet. But now the same can have an impact on the sustainability of space resources. So, we argue that the impulse to compete in the Earth’s orbits, a space that is already congested, should be mitigated in consideration of preserving sustainable orbital environment for future generations. Environmental efforts of global multi-stakeholder Internet governance platforms could inform environmental and sustainable outer space governance efforts, especially as they relate to space-based Internet infrastructure. Thank you.

Moderator:
Thank you very much, Berna. And yes, the timing was perfect. I will now move quickly to Kimberley. Kimberley, you are online. Yes, you are online. Can we please make Kimberley Anastasio a co-host? Thank you. And Kimberley, the floor is yours. Oh, cool. And we can hear you. We heard something.

Kimberley Anastasio:
All right. Can you confirm that everything is working properly? Wonderful, thank you. All right, hello everyone. And thank you to the GigaNet organizers. It is my pleasure to be here today talking to you about a project that is part of my dissertation research at the School of Communication at American University. And this project addresses the intersection of information and communication technologies, ICTs, and the environment focusing on ICT standards. We’re meeting now at the IGF and the Internet Governance Forum to set the environment as the main thematic track for the first time in 2020. And it is definitely not alone in such an endeavor among internet governance organizations. Recently, there are plenty of standard setting organizations, organizations that are establishing rules for how ICTs work and how information circulates on the internet. They’re also turning their attention to environmental concerns and working on the creation of what they call quote unquote greener internet protocols. This paper that I’m presenting today is this first step towards this broader research project on this ICTs standards, in which I talk to people working on ICTs in relation to the environment on the implications that standards can have for enabling and constraining environmental rights. Meaning here, the right to a healthy, clean, safe, and sustainable environment as established by the United Nations Environmental Program. And this is a research that is rooted as part of this communication field called environmental media studies, a field that addresses this overlapping spheres of environmental issues in the production and uses of new media, including ICTs. But still among the researchers that are part of this environmental media studies fields, focus tend to be more on data centers, on AI, on the things that are more closely visible to internet users. And one niche, but fundamental part of ICTs is usually overlooked ICT standards. So I’m now joining two infrastructure studies that deal with this more hidden layer of ICTs. Moreover, we know internet governance scholarship for a long time has been examining internet standardization processes by seeing ICT standards as things that are not just technical but also things that are political. But when we deal about with the values and politics that can be inscribed into internet governance artifacts such as protocols we usually focus on those rights that are more closely related to the digital world like freedom of expression or privacy and data protection. But environmental rights should be considered as this another example of how politics and rights are embedded into internet governance and how then ICT standards may be another venue where the politics of around the environment are enacted. The broader research is an analysis of both the international telecommunication union and the internet engineering task force and the work that they are doing that are related to the environment. For this paper I relied on semi-instructed in-depth interviews that I did with 18 interviewees that are experts that have already advocated for environmental concerns about the internet and its infrastructures or that have done this or they’re currently doing this. So when I talk to the people that are already working on this umbrella between environmental rights in the context of ICTs I ask them about their knowledge about standards and their perspective on where ICT standards could fit the broader agenda and despite most of them mentioning that they have very few knowledge on standards their answers ended up echoing both what is being said on the literature but also on echoed some things that some standard setting organizations are already working on for a while. So echoing the literature interviews situated this debate about ICTs in the environment in these two parallel understandings of the technologies. On one hand digitalization allowing us to enable a more sustainable economy so ICTs being employed basically to tackle climate change and then on the other hand focusing on the negative aspects of digitalization. So how digitalization by itself can also have an impact, be it actually positive or negative on the environment by itself. But in the end, what most of them noted is that to act on this intersection between ICTs and the environment, one should account for both things. So how these standards can help the sector be more environmental friendly, by enabling other things to happen among other sectors, but also how environmental friendly the ICT technologies themselves can be. And then when it comes to establishing what roles these ICT standards could play in enabling to account for both these two things, the promises and the pitfalls of digital technologies, the experts highlighted two main areas of action. So basically establishing a common language or parameters for dealing with this issue, but also establishing mechanisms for accountability. So, and both of these things were mentioned in relation to the standards that would help us avoid carbon emissions in other sectors, but also in standards that are trying to account for or cut down the environmental impact of ICT themselves. And at the center of this discussion on intersection and the role of standards in this case, is basically this necessity for quantification and addressing the materiality of ICTs. And also the fact that these conversations that we’re starting to have more on the standards setting organizations are kind of leading the game. And they come in a context in which the mindset of the ICT sector is one of evidence and consumerism. And we know that quantification is a vital part of what standards is, be them ICT related or not, because the standards are the things that define the procedure. they regulate behaviors, they ensure interoperability, and for that, quantifying, classifying, formalizing processes is key. But when it comes to measuring the environmental impact of ICTs, be it from any perspective, so software, hardware, or networking, this is not an easy task. And this means that even when people do recognize the physicality of the internet and the impact that ICTs can have, there is no simple way to quantify its relation to the environment. Be it from, again, the carbon footprint, the energy consumption, the natural resources extraction, disposability, and things of these sorts. But one thing that we have to keep in mind is that materiality is more than just this palpable thing. As seen on the slide, materiality also refers to this shape and affordances of the physical world, but also the social relations that are part of our lived reality. So we address ICTs as something that is physically located and situated in the environment, although it is surrounded by discourses of immateriality. But to act on this issue does not necessarily mean that we should be stuck if we’re not capable of precisely measuring this entanglement of ICTs in nature, or that we should stop at the measuring phase alone, precisely because we recognize ICTs as something that is relational. An interviewer responded to something similar like that when they identify what they believe to be the root of the problem. The root of the problem being not the environmental impact of the separate devices, products, services, the separate standards themselves, but the socio-economic model behind how society deals with ICTs. One interviewer, for instance, said that standard setting is really important because it allows for environmental best practices to come in and enter its way at the technical level, but that would be in direct competition with the business as usual business model of the entire sector. But some standard setting organizations are already engaging in environmental related discussions, both by creating standards that relate to the environment, but also as organizations themselves engaging in these discussions in other settings. And the two organizations that I’ll be studying further for my dissertation, as I mentioned, is the ITU and the IETF. The ITU has already more than 140 standards that are related to the environment. It has one study group that’s called Environmental Circular Economy that is dedicated to dealing with these issues. The mandate is to work on the environmental challenges on ICTs. And the IETF is something that is more recent, since the ITU has been following the UN Sustainable Development Goals for a while. The IETF is now catching up on this issue as well. It has almost 20 standards that are more closely related to environmental issues. And it also created recently a group that is dedicated to addressing sustainability issues in relation to ICTs among them. Just to mention a couple of examples, the IETF has been dealing with a protocol for Bluetooth to turn Bluetooth less energy consuming from the perspective of internet of things. The ITU has several measurements on the carbon footprint of the ICT sector. And as I mentioned, the scholarship have already established that the standards are these political things that can incentivize or constrain certain behaviors. Two important ICT standard set organizations are already engaging and increasingly acting on environmental matters. And the next step for this research is to delve into the work that they’re doing and try to investigate what areas they are trying to tackle, what interests are also being addressed there, and how can we move forward with this agenda even beyond these two organizations and further down on their standardization process. Thank you very much and I’m available for any questions or comments that you might have.

Moderator:
Thanks very much, Kimberly. Right, we have a few minutes and I will abuse my position as chair and hope that the Daniel, the next, you don’t mind starting a couple of minutes later. Just so that, because we started late. Right, so I will first of all just make some comments on the papers that we heard and the papers that we received from you. Thank you very much for those. Before I try and go with some leading questions and hopefully that will stimulate a bit of a discussion. So if you have questions in the audience, already start thinking about how to formulate them and pray that I don’t raise them first. I’m sure I won’t. Okay, so I will go through the papers in the order that they were presented. So Yig-Chan, who is still online, congratulations. It’s four o’clock in the morning or something, so congratulations. Really interesting paper. The fact that it’s already published means that my comments are moot, but I was thinking of how you could actually take this further. I mean, there’s lots of many interesting statements in your paper and the way you actually position the debates around this and almost show the similarity between the different approaches that you see in the different regions of the world that you looked at. Very interesting. I would have loved to have learned more about your reflections on the sociological nature of that, that whole data divide. So data, knowledge, information, and so on, and see how that fits in. One of the things that kind of touched me in the paper was the way you explained how those differences and those similarities come out. And so I was very happy to read that, and that stimulated a lot of thought in my head at least. However, I would have liked you to have been a bit more argumentative in that sense. You laid out some of the conditions, and you showed that there are differences and there are similarities, and it would have been nice to see how that plays out in different policy debates that are going on. Because I know that the EU has its data for strategy, strategy for data, and I’m sure that that plays, you know, you could do a really interesting policy analysis on that, and that might be a next step that you want to go for, to actually try and unpack how these reflections on epistemic rights and so on actually play out in the policy field. That might be really kind of interesting to see how that comes out, because of course On the one level, there’s a lot of conflictual discourse around the different approaches, but what you show is that there are some fundamental similarities as well, and it might be interesting to do that. I was also thinking that there are maybe other regions in the world that have different approaches to data in that sense, and it might be interesting at some point to also reflect on that, maybe, you know, in the paper, in the next publication, maybe interesting to have a section that looks at global approaches, and maybe I know that in Japan they have a different approach to data, to treating data in that sense, so that might be interesting. Interesting. Vahisha, your paper, and Milton is in the room as well, I could definitely see the questions around the governance of this, and congratulations for that, because I think also in the paper as I read it, I felt that you were not struggling with it, but it was something that you said, okay, I want to look at this as a governance question and not a technical question, but I need to spend lots of time explaining the technical issues in order to understand the governance side. And so although you focus on the fact that you want to do a governance paper, as I was reading it, I felt a lot of the technical knowledge, which was very useful, but actually left little space or mind space for the reader for those bigger governance questions, which are really interesting. I also was thinking a bit about how you addressed, so how you said you talked about your narrative prerogative, and how you addressed the story from 2019 first, but that was maybe the problem that made the context and the issue visible. So maybe that’s how you do that. You don’t have to say, I lied, right? I was also wondering, so you mentioned that there are different voting, you said that actually the platforms and the certificate authorities agree on a lot of things, right? I was wondering, the process leading up to the votes, that would be really interesting to understand, right? And then, of course, you mentioned that it’s a private organization dealing with public goods, and I’m going to ask the question that you asked us to ask you. Can you please explain how government are involved in that, because they are not explicitly involved, but they are involved, right? And that would be interesting to see, because, of course, there are multiple dimensions to these stories, and I would have liked to have heard a bit more, or teased you a bit more in that sense. Berna and Joanna, thank you very much for your paper as well. When I first read the title, I thought that you were trying to do a lot in this paper. It’s covering low-Earth satellite, low-Earth orbit satellites, it’s covering environmental issues, it’s covering cyber security issues, it’s covering quite a lot in the paper. And I found that, at first I was thinking, wow, how are they going to do all this? But you managed, so that was good. I was wondering a bit, when I looked through the paper, I felt that the ordering was sometimes, there were some bits that probably could have gone a bit earlier, in order to help me understand the flow of the paper. So I’ll give you some examples later, but, I mean, for example, you introduced the concept of mega-constellation, and I didn’t know what that was until I’d read two pages later. So things like that. But also, in the way that the argument builds up, I was wondering, Section 4 may be more interesting than Section 3, and vice versa. There are, you’ve mentioned, rightly, the security concerns, right? But I was wondering, so there are also, the security concerns and the sustainability concerns actually cross over quite a bit, I think. Because if somebody were to shoot one of these things out of the sky, and if then the cascading effect happens, I was wondering, you treat them as two separate things. And I was thinking it might be interesting to also show that there are direct connections between them. those two. And then you, of course, you go on and you talk about the ITU, but I know that there have been international collaboration efforts. I know the European Union has been trying at least for a long time with the space policy to develop things, and I was wondering, I didn’t really see mention of that too much, and I thought that might be interesting to bring in, because that then addresses the questions that you had raised in the tensions between the national and the global, right? And there I would be interested, you talked about sovereignty as, or states using sovereignty to say we need our own mega constellation, but then in the end that still needs a coordination effort, right? Unless they want to knock each other out of the sky, right? So that might be, that’s also something that I think you could raise in your paper a bit more, okay? And then in terms of sustainability, it might be worthwhile to clarify at the beginning of the paper what you mean, because I was also thinking, oh, is it more environmentally friendly to put satellites in low Earth orbit than to have routers or whatever data centers on the planet Earth? But actually, no, you meant something else. Okay. Kimberly Rushing. Environmental rights. Thank you very much for this paper. Really worthwhile effort. It’s part of a broader project, and I’d love to know a bit more about how that fits in. I think that could be a bit clearer in the paper. You focus on the role of standards authorities. You look at the IETF and the ITU. I was wondering, in your reflections, do you actually think about the normative biases that are built in to these actors? I mean, you mentioned the work that’s been done by other scholars. that try and unpack those. But right now, you’ve gone through the interviews and you’re looking at those quite literally. And I was wondering if you do that. I think there’s also quite a lot of work, maybe not directly just on internet standards organizations, but standardizations bodies as a whole. And I know you come from the literature that’s looking very much at sustainability and standards. But there may be some work. Also, there was quite a lot of work in the 1980s and the early 1990s published in this space. So that might be interesting for you to look at. I was also thinking, you do kind of implicitly look, or no, you explicitly mentioned it in your presentation. You look at environmental rights in the human rights context. And I think that was very interesting as well. One of the things, I know you’re only looking at standards, but another area where there’s been quite a lot of reflection is on the implementation of data centers and the environmental consequences of those. And I was wondering if some of those debates in the literature might not be interesting. So those are my far too long, but hopefully useful comments. I would like to see if there are any questions from the floor. Microphones have been put out. So if you want to raise a question, please go and stand behind the mic. Otherwise, we’ll go back to the presenters for a quick response. Milton is going to the mic. Go ahead.

Audience:
Just a question about the satellite paper. You talked about the creation of these government-run mega constellations and somehow that’s related to fragmentation. By the way, I agree with Jamal that it’s hard to combine the environmental… global commons Tragedy of the commons aspect and the fragmentation aspect of your paper, but I’m going to focus on fragmentation You know what What are they actually doing are they proposing to not allow other? satellites to distribute signals to their country And what is their leverage for doing that and then they’re going to set up their own? What why do they need to set up their own mega constellation to to do that if they’re only concerned about their own territory

Moderator:
If there are no other questions at this moment, we’ll go back to the panelists should we go back in the order You can did you want to mention something?Vagisha

Vagisha Srivastava:
Thank you for the comments Because of lack of time I’m not going to address all of them But I think the voting process that you asked about was interesting me when we were going through the ballot readings and everything also the interviews What we learned about was that a lot of formal language that goes to voting is already agreed upon and the consensus mechanism It’s built pre voting or pre setting the language itself that could be one of the reasons why a lot of these votes are Non conflicting, but it is still interesting to see how Sort of sees react to the browsers react to the process itself Do you have any specific question that you want me to answer?

Berna Akcali Gur:
Yeah, okay, Joanna may add to my comments if she I I think she’s still online. Yes, okay, so Jamal, thank you for your comments and I agree, we are trying to address a few important topics all at one in one paper and I’ll take a look at your recommendations about the EU space policy. I was thinking that the EU space policy and the fact that they are trying to also deploy their own satellite constellation may be a contradictory move because I think there was a paper, there was a EU paper, research paper represented to the parliament saying that the EU doesn’t actually need to own a mega constellation for purposes of access but they still thought that it was important from a strategic and security perspective but I should maybe add that to the paper. And about Milton’s question, so from my understanding of fragmentation, there are different manifestations of fragmentation and one manifestation is through government policy and regulations where the governments try and establish control over infrastructure and the components used for that infrastructure. So decoupling at the 5G infrastructure, for example, was an example of that, whereas the groups of countries have refused to use each other’s technology for cybersecurity reasons and of course there was deeper geopolitical motives behind that as well. So when I look at the government papers justifying investment in these mega constellations. which are elaborate infrastructures, the governments refer to them as sovereign infrastructures that are necessary for cyber sovereignty and cyber security reasons. And so it is from their policy papers that I see that they see these infrastructures, although they are not terrestrially located within the land, control of these infrastructure is still by the companies that are located within their territories. So they are very much seen as a territorial infrastructure from those that can deploy these constellations. So what about the others? For the previous research that we had done, the countries that cannot have their own mega constellations but are planning to use them, see data governance, for example, as their major concern. So, for example, the gateways to the internet, the ground, the gateways to the internet, the ground infrastructures that are need to be, that you need to have every 1000 kilometers. For example, the countries were saying that if we are going to authorize services of these mega constellations, maybe we would like to require them to have a ground station within our territory, even if they don’t need one, even if there is one within 1000 kilometers. And it is to control, the intention is to control cross-border data transfers. And so it is still, the intention is to control the cross-border data transfers and to maintain the control that they already have or extend that control in accordance with their policies that are still developing as the geopolitical tensions intensify. So I hope that was, that answered the question.

Moderator:
I think you have to, Joanna, did you want to add something? Nothing further from me, okay, perfect, Kimberly. Okay, I’ll try to be very fast and say

Kimberley Anastasio:
thank you very much for your comments. Jamal, you mentioned three things that are the things that I’m currently working on, which I think it’s very appropriate as feedback. Yes, I am trying to now situate my study better among studies that deal more with standardization as a whole and not just standardization from an ICT perspective. And also just to explain a little bit further this project, the bulk of the project is based on a methodology that involves the content analysis of the almost 200 standards that have been either approved under discussion or rejected in the two organizations that I am analyzing and interviews with the participants of these organizations, so the ITU members and the IETF members. But in order for me to properly understand the work that these organizations are doing in light of the possibilities for the ICT standardization sector as a whole, I felt it was needed for me to come up with a framework of action, not only from the literature on the environmental impact of ICTs, but also from the perspective of those working on the ground trying to build this agenda in international organizations and spaces like that. So that’s where this smaller project fits the broader one. It is to help me come up with this framework of action through which I’ll then analyze how two particular organizations are engaging in this matter. But thank you very much, and I’ll wait for your further comments on the paper. So thank you, thank you all.

Moderator:
Right, thank you very much for all of the interesting papers, and hopefully. This has, well, I think this has been a great start to the symposium, so thanks very much to all of the speakers and all of the paper writers and everything like that. Thanks a lot. I will now ask you to leave the floor. Should we just leave it here? Yeah? You take it from me. Danielle, could I, I think, given the interest of time, we won’t have a five minute break and we’ll move straight to the second panel. Is that okay with you, Danielle? Yeah, okay, we can have a bathroom break. You will be timed. So if you need a couple of minutes, just use a couple of minutes and then otherwise we’ll get back to you straight away. Okay. Well, I was gonna say you can sit, I’m gonna sit down there. You don’t need me, do you? No, okay. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I’m Danielle Flonk. I’m an assistant professor in international relations at Hitosubashi University in Tokyo and I’ll be chairing and discussing this session. Today we have Nanette Levinson who’s presenting on institutional change in cyber governance, Jamie Stewart on women peace and cybersecurity in Southeast Asia, Kamesh and Ghazim Rizvi on making design and utilization of generative AI technologies ethical. Basically everybody gets ten minutes to present after which I will give five minutes of feedback. Nanette? Is Nanette here? Okay, go ahead.

Nanette Levinson:
Yes, can you hear me? Yes, perfect. Thank you. Good morning Kyoto time, good evening my time, good day to whatever time zone one may be on. The papers that were just presented in the first panel set the scene I think beautifully. They were fantastic papers, wonderful discussion. I’m going to share my screen now. I believe that’s working, excellent. I’m going to share with you some work from the past year of a project that I’ve been working on for the last four years. I’ve been researching the United Nations open-ended working group dealing with cybersecurity that began in 2019 to 2021 and the second rendition continues now and actually it is due to go until 2025. And as we all know, this has been a particularly unusual time period punctuated by a pandemic and the war in Ukraine. What I’d like to do in my presentation is focus just on this past year, 2022 to 2023. I’m going to share with you a few of my key research questions, my major findings, several of them, and just very briefly some thoughts on the future research in this arena. I want to highlight three research questions. I’ve been thinking about the field of internet governance for a number of, or at least several decades, and I wanted to have a chance in this paper to take a long-term view, thinking about institutional change using various disciplinary approaches. And I was particularly interested in what could be called deinstitutionalization processes in cyber governance. The organization on which I focus within the discussions at the Open-Ended Working Group is a proposal for something called a program of action, which involves a more regular way to include other stakeholders, stakeholders other than governments, as a part of regular institutional dialogue related to cyber security at the United Nations. In the paper, I formulate a cross-disciplinary approach to these analyses, and I ask the question, how do the findings from this longitudinal study of the Open-Ended Working Group relate to work on institutional change? And further, I ask what possible catalytic factors could be at work related to such changes? And in order to do that, I go back to some work that I did a little bit earlier, where I looked at institutional change indicators, and I want to highlight here here, three of them. First, an indicator for institutional change or incipient institutional change is the absence of an authoritative analogy or the presence of inconsistent isomorphic poles. Second indicator, a change in the legitimacy of an idea and a change in the rhetoric related to it. Third indicator, and these are not sequential, they’re rather a chaotic continuum. The third one is the emergence of a new variety of organizational arrangements consistent with a new idea. And all of this, all of these indicators I look against the setting, the backdrop of increasing uncertainty and turbulence in the environmental setting of the open-ended working group and indeed major geopolitical pulls. Here are a few of my findings at a glance. My earlier work on the open-ended working group from 2019 to 2021 noted the presence of what I called an idea galaxy. And what I mean by that is simply a cluster of specific words that appear near one another. And the subsequent positioning of these words next to or very near to a value or a norm that is already more generally accepted. So in 2019 to 2021, I discovered the following words, human rights, gender, sustainable development or international development or developing country and less frequently non-state actors or multi-stakeholders. And they often were linked both in oral presentations and in written submissions. And I use content analyses on all of these. They were most often linked to sections dealing with capacity building. Interestingly, the 2021. open-ended working group final report, which was adopted by consensus. And again, this echoes some of the discussion about consensus in standards organizations highlighted in the earlier papers. Interestingly, those words were adopted by consensus in that 2021 open-ended working group. But what has occurred in the past year, 2022, 2023, is a fascinating development. The same idea cluster appears in many submissions, many oral presentations, many informal sessions with other stakeholders, but there also appears another opposing cluster or idea galaxy that I term a dueling idea galaxy. Let me say more about this. We remember the idea cluster that was accepted by consensus in 2021. This appears in much of the discussion and was going to appear and did appear in draft versions of the annual progress report that was supposed to be adopted by consensus at the fifth substantive session just a couple of months ago in New York City at the United Nations. However, interestingly, a dueling idea cluster was introduced on the very last day of that discussion in opposition to accepting the report with those words from 2021 in it as a consensus agreement. And instead, the Russian delegation, along with, I guess, the Chinese delegation, Belarus, and maybe four or five other countries, proposed or said that it was not going to go along with consensus, that it strongly wanted and it had a rationale. And I put this in italics that their idea cluster was rewarding such as convention. or treaty. And this really signified their commitment to the development of new norms in the cybersecurity area. And it also signified opposition to this program of action idea as a part of regular institutional dialogue. I do wanna point out that the idea for a treaty was not new in 2022, 2023, it appears throughout discussions. But what is new is its placement in direct opposition to the first idea galaxy above, the one that was adopted by consensus in 21. These dueling clusters reflect the presence of catalytic factors, especially the war in Ukraine, and they provide indications of potential institutional change and increasing turbulence, possibly marking the end of a long cycle of internet governance trajectory that included roles, even though appropriate roles, quote unquote in certain terminology for non-state actor stakeholders. So let me conclude and talk a little bit about future research. The outcome that I just alluded to of the 2022, 2023 discussions in terms of ultimately getting consensus on the annual progress report of the Open-Ended Working Group that was just submitted to the General Assembly, I guess in September, went down to the very last moments of the very last day of that final fifth substantive session. And the only way that the consensus was achieved was by the Open-Ended Working Group Chair, Ambassador Garfour, who took a suspension and went around to do informal negotiations and he solved the dissensus by what he termed in his words, quote, technical adjustments to assure the consensus. And the delegation head termed his technical adjustment as footnote diplomacy. Very quickly, the chair crafted two separate independent footnotes, and I call these balancing the dueling idea on galaxies. Each of the footnotes gave a small amount of recognition to each of those idea clusters and set the stage, of course, for further discussion in 2023 to 2024 open-ended working group ahead. And there are many dates set ahead and discussions related to this topic. So in sum, there are indications of potential institutional change. My project is going to continue to identify any emergent or disappearing idea galaxies in the year ahead. These relate to those conflicting isomorphic poles that I began with as indicators of institutional change. And I hope to be able to use, now that we are primarily post-pandemic, a more mixed methods approach to capture the more individual level idea entrepreneurship in these turbulent times, times that continue to catalyze change processes. And with that, I’m gonna turn the floor back to our chair. Thank you.

Moderator:
Thank you, Nanette. I really like this paper. I’m gonna give feedback now and then we go to the next paper. So I really like this paper because it addresses the big questions in global internet governance. And it looks at recent developments in an important institution, namely the open-ended working group. I have two broader feedback points, one on theory and the second on empirics. So on theory, a number of things, I think, could be further clarified. First, you use a lot of concepts, especially when you set out the different indicators of institutionalization and deinstitutionalization and the stages of institutionalization. So, do you really need all these concepts, such as there was habitualization, objectification, sedimentation? Many of these concepts do not come back in the analysis, and I would only focus on those that you actually need for your analysis, and define more clearly what you mean by them. Second, you use institutionalization and deinstitutionalization processes as a binary, but what about the literature on contested multilateralism or counterinstitutionalization? There authors emphasize competitive regime creation, regime shifting, so there’s more than just making and breaking of an institution. For instance, parallel regime creation, right? Like the Open Ended Working Group was an alternative to the UNGGE. Institutions can gain in relevance, or lose relevance, or even become zombies. So is this binary really maybe too limited? With regard to empirics, the findings address three main categories, emerging technologies, crises, and idea galaxies, but where do these categories come from? Why did you pick these and not others? And how are they theoretically related to institutionalization? I think the section about idea galaxies is the most elaborate one, so it’s clear here which topics you focus on, however, I think you could elaborate more on why you focus on certain ideas and not on others. For instance, you focus on issues such as gender, human rights, sustainable development, but why not on other issues such as democracy and equality? Also I think this empirical section could be a paper of its own, so you could consider focusing the paper on idea galaxies only, and thoroughly setting out your theory and operationalization, and then things like emerging technologies and crises could function maybe more as scope conditions to competing idea galaxies. Thank you. I give the floor to Jamie.

Jamie Stewart:
Hello, everyone. I hope you can hear me well. Yes. Wonderful. Thank you very much. Let me just start my presentation. Thank you all for having me here. And I do deeply apologize for being remote. I was hoping to be there in person but was unable to make it. I’m Jamie. I’m from UNU in Macau. That is the United Nations University. And I’m a senior researcher and team lead there. I’m going to be talking about something that’s quite closely related to the presentation of Nanette. But it is a little bit of a different focus. So it’s infrastructure that would be in the stratosphere. And then finally, with SDG 10, reducing inequality. So it would help to reduce the urban-rural divide and also the gender inequality in use of internet services. Now, in terms of the ITU process, we have the World Radio Communication Conference, which is coming up in November and December of this year in Dubai. There, we’re going to be discussing ways of allowing HIBs to use additional frequency bands. It’s not an opposition to technocentric views of cybersecurity, which have a focus on protection of technical systems and networks. But rather, it’s about extending that focus to go beyond technical systems and think about cybersecurity as ensuring expression and exercise of human rights, particularly around access to information and freedom of the press. And this sort of experiment is a… …described in the slide. This slide shows our company vision. And regardless of which level we look at that at, the national level, the organisational level, even the individual level, these things, that protection should be treated as a mechanism for which we should treat human security and protect human rights. So in this work, this is a piece of research that was done in partnership with the UN Women Regional Data Centre on the Asia Pacific. We centralised the concept of safety and wellbeing and looked at how cyber security practices, particularly within civil society and those who are working in the space of human rights defence, can threaten or disempower users of technology. And so this is also working beyond human factors within cyber security, which indicate to us that people and their behaviours, their thoughts and feelings are important for cyber security practices. That is a component here, but it’s not the central element. The central element is the protection of people and human rights as the function of cyber security. And this is really nicely supported by the Association for Progressive Communications, which have come out with a definition of cyber security that centralise human rights and suggest that cyber security and human rights in and of themselves are complementary and mutually reinforcing and interdependent, and therefore we have to pursue them both together to promote freedom and security. So we can take now the foundation of human-centric cyber security, and then what we did is add a gendered lens on top of that, because what we’re interested in is cyber security as a function of the WPS agenda and how we can support women and girls within the context of peace and security. So as I’ve mentioned already, the cyber security research tends to focus on the technical. And we are interested in taking human factors into cybersecurity, but that is both understanding psychological behavioral factors as they shape cybersecurity, as well as a focus on human rights, harms and safety. Alongside those two critical elements. We also recognize that gender fundamentally shapes cybersecurity. Oops, excuse me. And that is because for a few major reasons. The first is that there are gender differences and access and uses to technologies as well as interactions in online spaces. All of these things influence cybersecurity posture and cyber resilience. We also know from a lot of work that’s been that’s been happening within the genders, gender and violence space online, is that online gender dynamics tend to perpetuate power relationships that prevalent offline. So those masculine masculinized norms and how they influence social relationships replicated in online spaces. And we also recognize that women experience distinct types of online violence, and that these types of online violence are more persuasive for women than they are for me. This is all alongside the gender digital divide which I’ll talk about a little bit more. So what does cybersecurity look like in Southeast Asia. Well, the rapid expansion of digital technologies and internet connectivity within the region as well as the variance in terms of internet connection across different countries and development across different challenges. So what we see is that there are some countries within the region which are highly prepared and doing a lot of a lot of really critical and novel work in terms of governance in this area and others that are not the OHCHR just this year released a report on on cybersecurity within Southeast Asia, and what they found what they said. was that the regulatory instruments that are being developed within the region, where there is a high level of investment in surveillance, and in particular are increasing what they considered to be arbitrary and disproportional restrictions on freedom of expression and privacy. And there were six key issues that they suggested were relevant for the region. And I’m not gonna go through these in a lot of details because there’s quite a bit for me to cover in the presentation. But what I will suggest is that these critical elements spreading of hate speech, coordinated attacks, technology surveillance, and restrictive frameworks, criminalization, and internet shutdowns. I would suggest those who are interested to read this report because it’s very enlightening. And as I said, this is quite aligned to the conversation, the discussion of Nanette, where we talked about broader conversations that might be in opposition to human rights. One of the things that’s come up recently is that the general assembly have expressed concerns over these broader consensus that is thinking that cyber crime and this legislation might be misused against human rights defenders and endanger human rights more generally. So we see this a lot in the recognition of what’s happening with journalists around the world and their freedom of speech. Sorry, I’m running through things relatively quickly because I know I don’t have a lot of time. We focused on women, civil society, and human rights defenders in the region. And this group are very disproportionately affected by cyber attacks. And that is because they’re working with marginalized groups in sensitive politicized topics. They are often not well protected by laws and regulations where they exist. They have little say in those laws and regulations. And sometimes, and we know this from direct case study work, that are actually used, those laws and regulations used to directly. harm them and they face a gender digital divide meaning that they’re less represented within the cyber security field and technical roles and therefore they’re less likely to take that into their protect zone. So we wanted to look at cyber security risks and resilience with the goal of promoting human and digital rights of women and girls in Southeast Asia and what we did was quite a complex project that involved a review of the national and regional context. We did an online survey with those who are employed in civil society organizations advocating for women. We interviewed a whole range of women human rights defenders but specifically those who are working in the space of digital rights as well and then we conducted a cyber audit. I’m not going to be talking about all of this and the report will be launched probably early next year so those of you who are interested can contact me about that. I just wanted to really briefly go over something that I think is of quite a lot of importance. This is not comparing it to other regions around the world. What this is is trends in legislation that are happening within the Southeast Asian region and the types of legislation that the amount of legislation within cyberspace that is happening. So you can see there in terms of the top figure the year there was a large increase 15. This has collapsed across countries of new legislative and regulatory frameworks that came about and there were five in 2022 but this the count here was based on the research. So there was a lot of new legislation that’s happening in this area and some of it looks positive but it may not be necessarily used in the same way for all people. So what we know about these laws in Southeast Asia is that the increasing number of laws and the type of laws that are happening allow for surveillance, search and seizure and there are a whole variety of as a set of case studies around this where there’s targeted monitoring including CCTV cameras, collection of biometric data, the the surveillance of protestors and taking the photos of protestors and using AI. Yep, great. I’ll rush through the end. And the using of those types of technologies in order to target human rights defenders. So we know that all of these, I won’t go through them in detail, have a lot of impact specifically on human rights defenders. Again, I won’t go through this. Basically, what I wanted to say more generally from this data is that there are, as I said, there’s variance in the way that gender equality and internet freedom is enacted across Southeast Asia. And even in places where there are high levels of cybersecurity frameworks, they don’t necessarily function in the same way for women’s ESOs and human rights defenders. So needless to say, in our research, we found that technology was actually at the heart of the work that civil society are engaging in, or like our life or the life of our work, which was suggested by women, and that social media was a critical asset for their functioning, which was also a place where they were directly targeted. They faced a huge variety of cyber threats. And we did do some comparison to say that there was high levels of online harassment, misinformation, cyber bombing, and a huge number of our sample had false information spread about them. We also found that there was less cyber resilience amongst these organizations and human rights defenders than what we then what we would hope that there’s some that half felt prepared could respond and recover. But that also means that half did not. And we really need to what we found in here is that we need to decentralize the constant new things, but actually allow people to use the features of digital digital technology in safe and secure ways. The content and the cyber attacks that were faced by women CSOs and women human rights defenders were highly gendered. And I’ve got some quick case studies here, the photos taken without consent and fabricated the deep fakes and used to dehumanize and discredit human rights defenders, that there was an idea of that the human rights defenders should expect to experience violence online and harassment. There were deep threats and discrediting of feminist movements and silencing and removal of safe spaces for discourse. I really wanted to just focus on the last recommendation before I must finish, which is that aside from some of the organizational level recommendations that we’re putting forward, we also need to ensure and what we’re recommending from this work is that there’s gender responsive means that human centric for recourse against cyber attacks and threats. And this is made particularly difficult within a context where there is the perpetrators of those cyber attacks may likely to be state actors or coordinated attacks that are sponsored and highly or well-funded. So we need to make sure that our frameworks are aligned with this and where we’re endorsing these global frameworks, they take this into account. Thank you very much for everybody for your time.

Moderator:
Thank you, Jamie. I think that’s super important and interesting research and it’s a piece that I could relate to a lot personally, so I really appreciate it. I basically have three feedback points, one on the scope of your concepts, one on actors and one on future steps. So with regard to scope, what do you actually include in your definition of threat in your research? So in your introduction, you speak of cyber attacks. Later, you also talk about digital literacy, misinformation and these are all different kinds. of threats, right? The mechanism about defending yourself against a cyber attack such as doxing or stalking is, I would say, way more direct and immediate than reducing misinformation. So should you not make a categorization of the type of threats and how this impacts marginalized communities? How does the causal mechanism of threat differ here and, by extension, how does this call for different types of regulation? At the same time, how far do you think regulation can actually reach? At some point you made a very interesting point that cybercrime legislation is being misused to target human rights defenders. So I think this is a very relevant and interesting point and I think you can make a similar argument about harassers sometimes weaponizing anti-harassment tools built into digital platforms basically to harass other people or that they pick certain platforms with the most limited options for moderation. So how effective do you think regulation actually is and at the same time what’s the alternative? Then on my second point about actors, it remained unclear to me who the actors in this piece really are. For instance, it would help if you could give some examples of women human rights defenders, women civil society organizations. Also maybe some anecdotes at the start could really help the reader understand what type of cases you’re talking about. A similar thing applies to threats. What actors are we talking about here? Because there’s lone wolves and trolls but there’s also coordinated attacks by groups, maybe political groups. So how does this affect policy recommendations? And then finally on future steps. So currently your recommendations are quite broad and I think you could make it a bit more concrete. For instance, you said that social media is a critical tool for operations but also increases risk exposure. So what instances of risk exposure on social… media did you see, and how would you recommend tackling this issue? Thank you. I give the floor to you guys.

Kazim Rizvi:
I hope I’m audible. Thank you to the chair, thank you to GIGANET and IGF for hosting us today in Kyoto on a very lovely morning and thank you to Kamesh for making it just in time. Unfortunately, he had to miss his flight but he got a new flight today and he made it in time so that’s great to see. So first of all, just very quickly introducing myself, my name is Kazim Rizvi, I’m the founding editor of The Dialogue, we are a tech policy think tank based out of New Delhi, India and we work across multiple issues, one of them is AI and we are really excited to present this paper which is authored by Kamesh along with his colleagues who are in India, most likely they are enjoying their Sunday morning unlike Kamesh and me but I think we are having a better time presenting this paper. So very quickly, we don’t want to waste too much time, so very quickly what is the objective and what are we trying to do here and as you see on the titles, this is basically looking at enabling responsible AI in India and we have come up with some principles and these are principles which we believe need to be implemented at different stages and the uniqueness about this paper is that the principles cut across the development stage of AI, the deployment as well as the usage by various actors and consumers and I think that’s where the uniqueness lies in the paper and that’s what we are trying to do because this has not been discussed in India at least till today and that’s the idea for us to sort of work on this paper. So if we move to the next slide, just sort of going through the outline very quickly and I think in the last year or so, we’ve been accustomed to hearing the word AI a lot more, right, with the rise of generative AI applications, most of you in this room and listening to us online are having a direct interaction. And we see AI proliferating across a lot of different ecosystems, such as technology, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI, AI. And what we see as researchers is that the technology is moving away from just a B to B to a B to C technology, where consumers are directly interfacing with AI models to, you know, help them with their daily tasks, professional duties, et cetera. So, you know, we see a lot of algorithms. In many ways, you know, the term we coined is algorithms are the atom of the Internet, right? You cannot live without them. And they make or they sort of create the structure of the Internet, which is the modern Internet as of today, and the services which are provided. So, for example, you know, I have a cat in my house, and I go on social media, and then I’m seeing multiple options to, you know, buy different type of food and stuff for the cat. And I’m not saying that you can’t do that, right? You know, you can buy a specific kind of food, but if you go to social media and sort of post some pictures, then you’ll be, you know, getting different, different type of suggestions and interventions. So it’s really taking over in terms of giving you ideas, giving you inputs from the music you listen to the kind of, you know, the places you want to visit. It’s really everywhere in our lives today. So while it’s doing a lot of good things, there are certain challenges, right? So, you know, we have a lot of challenges, and I think, you know, as we move to the next slide, what we’ve tried to do here in this paper is understand those challenges, and identify what are the implementing frameworks, governments, scholars, development organizations, multilateral organizations, tech companies have to work towards as well as civil society, right? So what we’ve done is we’ve sort of mapped out a certain specific ways of identifying responsible AI. Maybe we can move to the next slide. Yeah. So in this paper, we’ve mapped impacts and harm, and what we’ve done is we’ve looked at AI at the development stage, so the design and development at the algorithmic development model stage, where we’ve analyzed what are the harms which could take place when you’re designing the technology, when you’re really coding it, when you’re sort of coming up with algorithms, when you’re collecting data, what kind of data you’re collecting, how should you collect the data, what is your authenticity, etc. So that is one stage, which we’ve understood. And the second stage is the harm stage, which is the post-development development deployment stage, when the technology is deployed by industries. It could be, you know, horizontal industries such as finance, education, even environment sustainability, social media, whatever industry is using the technology, there are certain harms present over there. So how do we protect ourselves from those harms? So these are the two stages which we’ve come up with, and again, this is a very unique approach because most of the principles which you see, be it the OECD principles or the UNICEF principles or different multilateral principles or bilateral principles, they’re mostly focusing on the deployment sort of stage, and this is something which we have sort of figured out that design, deployment, and development, all three stages have to be met, and that’s the focus for the paper. If you go to the next slide, it’s pretty much sort of summing up, you know, what we are doing. So three stakeholders, which is the developer, and then you have the deployer, and then the end user, the end population. What are the principles for the end population as well? So, let’s say if you develop a health tech application, there are principles for the technologist, the coders who have designed the application. There are principles for the hospitals, clinics, doctors who are using the technology, and then there are principles when it comes to

Moderator:
how consumers are interfacing with the technology, and how do you protect them as well? So, these are the three stages and the stakeholders. So, then we’ve really mapped these harms across the AI lifecycle, and over here, Kamesh, if you want to quickly come in and talk a little bit about how we’ve done these mapping of different principles and what those principles are. I hope I’m audible. Yeah, I guess I am.

Kamesh Shekar:
So, thank you, Kazim, for setting the context for the paper itself. So, just coming from where you left itself, what the paper is trying to do, and why this is a unique way of looking at things, is basically most of the… Yeah. Basically, most of the times when we are, most of the frameworks which are available outside there is overly concentrated on the risk management which comes at the AI developer level. But what we have went about doing is that, is looked into a 360 degree approach where we wanted to move beyond the developer and ask a question about if at all a developer is designing a technology ethically, does that mean that when a technology is deployed or used, there will not be any fall through the cracks happening? So, just to answer this question is where we have come up with the model of principle-based ecosystem approach. And the model is basically talks about, as Kasim mentioned… is mapping all the principles for various stakeholders who come within the ecosystem itself, such that collectively, we could actually ensure that some of the adverse impacts that we have mapped doesn’t happen. So firstly, what we have done is came up. Firstly, what we did is we took five adverse impacts, and we chose exclusion, false prediction, and copyright infringement, privacy concerns, and information disorder itself. Why these five is basically because these are the most top five aspects which are talked about when it comes to AI implications itself. But this is not an exhaustive list. This is just a start of what we are doing. Then what we went about doing is, as Kazim already mentioned, we tried to look at impact and harm. For us, impact and harm is merely is this that impact is just a construct of a harm which could happen later, and how much you are aware of that. And harm is obviously exposed itself, and the actual harm happens. So our ideology behind the slide itself is this, how the first aspect of our paper itself is that. Whenever we talk about exclusion or any of these adverse impacts, we don’t really look at it from the granular level, where there are different stakeholders involved at different stages of the AI lifecycle itself, contributing at the different levels, which accumulates into something like an exclusion happening. So we went about going and mapping all of those impacts and harms, which occurs at the different stages of the lifecycle of the AI itself. One important aspect here, if you could look into the slide, is we have two important additional stages that we have added, which is the gray, which is your actual operationalization, which is at the deployment level. And then we went about doing. it at the direct usage, which is what Kazim mentioned about B2C implications coming into the picture. So next slide. Yeah. I think we skipped one. Now it’s working. Now it’s working. Now it’s working. Yeah. Now it is working. So yeah. So yeah. This is the one. So now that we know impact and harms are mapped, then what the paper goes about doing is mapping various principles that could be followed by different stakeholders at the different stages of the lifecycle. And here, if you could see, these are some of the principles which have been extracted from globally available frameworks, like your OECD, UN, and EU, and et cetera, and stuff. And also India’s G20 declaration, which also speaks about some of these principles. In addition to that, from our research also, we have suggested some new principles. So after principles, what we go about doing is that is the operationalization. Here the unique aspect that the paper tries to do is that when we talk about human in the loop as a principle, most of the times, we just use the term as passed by. But when it comes to operationalization, that particular principle means differently at different stages of the lifecycle. And that exact difference is what we wanted to bring out from this paper. For example, if you could look at this, at the planning to the build and use stage, human in the loop really means that you want to engage with your stakeholders, and et cetera, and stuff. Whereas for the actual operationalization stage, it could mean that you have to give a human anatomy to people, a subjected human anatomy to people, where they could also take some decisions against whatever the AI decision has been given. So we have brought out such differences into picture within the operationalization. Um, now the, now that impact is done, operationalization, sorry, principles are mapped, operationalization is done. Finally, the paper to just give a holistic approach, we also talk about the implementation which comes from your government here because like our research is extensively in Indian context. So we went about looking at like, you know, what is, um, what can be done by the Indian context in terms of like implementing such a framework where we look at like a domestic coordination, which is important within the legislations. And then international cooperation is important because various like, um, aspects are happening at like different, uh, um, you know, um, institutional level and like jurisdictional level and like bilateral level, et cetera, and stuff. Also like India moving towards a chair of being GPA, I guess like this is like this paper adds a great value in terms of starting that conversation, just one minute. And finally, we also talk about like establishing a public and private collaboration in terms of like how we can implement it. And like, this is something that like as an organization, we keep pushing in terms of like it not necessarily has to be something at the compliance level. It can also come in the level of like, you know, making it like a, you know, value proposition for the businesses to take it.

Moderator:
So I’m going to cut you here, I’m sorry. Thank you so much for your presentation. I think you bring up a very relevant question, um, that addresses a lot of blind spots in current academia. Um, and instead of looking at only developers, you also look at deployers and their role in responsible use of AI. And basically you have three main points of feedback, uh, one on the focus of your argument, one on the narrowing down of concepts, and one on your causal mechanisms. So with the focus of your argument, like I said, instead of looking at developers, you look at deployers. But I wondered why not also focus on, uh, users, like end users? So you think users, like, I don’t think you think that users have no role to play. in responsible and ethical use of AI, right? And especially, since you talk a lot about generative AI, this is often steered by end users. So what is your perception on their role to, like, on their role that they play in the responsible use of AI? Second, on narrowing down concepts, I think often you can make your argument more concrete. So for instance, on page 16, you argue that the AI solutions might be producing an error or may be designed to capture some biased parameters to produce a suggested outcome. However, real-life harms of such outcomes only translate into action when AI deployers blindly use the same for making real-life decisions. So in this case, I was like, OK, but then what do you mean by real-life harms? What do you mean by real-life decisions? What do you mean with AI solutions, you know? So this sometimes gets so broad that it could mean anything. And I think sometimes specifying what you mean would actually help making your argument. So arguments often remain quite abstract. And I think you can make it more concrete by basically defining what do you mean by AI? What do you mean by AI solutions? And just mention a couple of examples. And then finally, on conceptualization and causal mechanisms, we saw the figure, as well, on the AI lifecycle. I had a number of questions about this model. So on a more general level, it kind of remained unclear to me where this model is kind of derived from. Where does this come from? How did you arrive at this model? So I think you need a bit more like, OK, what is already out there? And what do we use to come to this model? Second, you argue that you want to focus on deployers, but the largest part of the model is still developers. So it’s not completely in line with the argument that you’re making in the paper. You say, OK, everybody focus on developers. We focus on deployers. But then in the model. in the AI lifecycle, it’s mostly developers. So what really is the role of deployers in this model? And then finally, I thought it was interesting, there’s this, like the top two categories were like exclusion and false prediction. But there was no impact on end users. And I wondered why. Because I thought like, there’s a lot of impact on end users if we think about exclusion and false predictions, right? So these are my points. I would like to open up the floor if people have questions or comments or… Yes. And then after we collect, we go back to the panel. Anybody else?

Audience:
Yes, so I’m gonna ask you a really tough question, but it’s more an attempt to make a general point about how messed up our dialogue about AI is rather than focusing on you, because I think you’re mapping out there of this ecosystem was actually a pretty interesting contribution and worthwhile. But you open your paper by saying, invoking the invention of the printing press, right? Now, can you use your imagination and try to project for me what would have happened if the authorities and the public in 1452 had decided they were going to regulate printing? And what do you think would have resulted from that?

Moderator:
Do we have any other questions in the room? Because otherwise we can go back to the panel. We can go in reverse order. We have like nine minutes left, so that would be like three minutes max. Sure, so to your question.

Kazim Rizvi:
So I think that’s a good point. So I think that’s a good point. So I think that’s a good point. So in this paper, we haven’t suggested that AI should be regulated, right? What we are saying is that, look, there are certain harms associated with the use of AI, which we have to be careful of. And we have to work towards developing some frameworks and principles around these harms, which we’ve identified. And I think that’s a good point. And I think that’s a good point. So, you know, across the globe, AI is regulated as it is, right? We’ve not taken a stand that, look, you have to sort of come up with very strong regulations to, you know, sort of really bucketed into different kind of, you know, technologies which should or should not be used. But maybe in the next 10, 15 years as the usage grows, we may see that, you know, we may see that, you know, we may see that AI should be regulated as it is. So I think it’s very clear that these are principles which will help in improving the effectiveness of the technology. I mean, the same argument goes for fire. The same argument we can apply for fire as well. So, you know, we may not have seen what we see today. But eventually it was. So the same argument applies to this, that an AI has been around for a few decades now. It’s not like the technology is very new of late. It’s been there for a while. But we are not suggesting that, look, put very strict or very hard regulations to begin with. What we’re suggesting is, look, move slowly, but watch out for harms as they take place. So, you know, I think it’s very clear that, you know, we may see that, you know, we may see that, you know, we may see that AI is not a solution. I mean, look at the industry, look at civil society. are a means to also put that discussion into context that look, we need to move towards more responsible deployment of AI. And what that means, even we don’t know. I mean, we are all studying this. A lot of scholars globally are trying to figure out what is really responsible AI, as much as responsible printing press or responsible use of fire would be.

Moderator:
Just quickly coming in on your… Yeah, very quickly coming in your points.

Kamesh Shekar:
There’s too many things to discuss in whatever you have said. We can take it offline too. On the very first thing on impact population and end users, so basically the paper does that, where it also says that as we as end users and impact population use such technologies, how should we responsibly use it? So there are certain principles and there are certain operationalization things that we talk there. So second thing that on your question on where is this, the life cycle comes from is derived from NIST and OECD and et cetera and stuff. In addition to that, we have added some aspects of our own. And we have also validated how we think that is important within the paper. Third thing about, if you could repeat your third point, or it was something on the exclusion and stuff.

Moderator:
Well, I don’t think we have time. I think we should take it to the break. We can go back to Jamie for like last points and then we can wrap it up, I think. Jamie, are you still there? Yes. Yes, I’m still here. Thank you. And thank you very much, Danielle, for your comments.

Jamie Stewart:
Just to, I will be very brief in these because I obviously don’t have a lot of chance to go over them in detail. And you brought up some really, really good points. Just to say we had a very- comprehensive list of threats that we asked about experiences at both the personal and the organizational level and we also had some open-ended information so people could add more. There is a lot going to be a lot more information about that in the report. You also asked about anecdotes in terms of actors. This is a very sensitive issue and I think you know in terms of diplomacy and what that looks like. We did ask about perpetrators and who they think the perpetrators are and obviously we have to be considered that that is perceptual as I mentioned right at the very beginning and there are a range of state and non-state actors and as I said most of them in terms of the stories were very coordinated and so that is when we’re experiencing but on social media and what those attacks look like obviously they’re sometimes very difficult to trace but we can definitely trace some of the surveillance software as it was used and that’s very relevant to the South Asian context. I did want to very briefly end on first off say that yes your recommendation your point about the recommendations being more concrete is very well taken and we’re working with the civil society right now in order to co-create those more but the last thing and what I wanted to end on was your very important point and I agree with this entirely the misuse of regulation and law and policy against human rights defenders or against journalists or advocates and those who speak out. I think this is an incredibly important point that is very nuanced and the ones that I really want to highlight and pay attention to are those ones that are considered to be anti-terrorism and cyber that the more generic cyber crime laws that really put those who are potentially speaking out in a place where they could be legislated against. And I think that’s something that we need to, we need to very strongly consider when we’re taking kind of more global and international regulatory frameworks, because they can do the opposite. And just having cybersecurity policy in place does not mean that it’s in place in a way that’s protective against human rights. So thank you for bringing that up and that’s well and thank you so much, everybody.

Moderator:
Thanks, Jamie. Thanks everybody who was on this panel and for participating. Also thanks to Nanette, even though she’s no longer here. Yes, I think we can go to the break. Everybody please give a hand to the panelists. And what time are we back? Like what time do we reconvene? We should come back at 12.35. 12.35, 1.35. 1.35. I hope. If we come back at 12.40, I think we can. 1.40. Yeah, yeah. It’s okay, the jet lag. It’s fine. Okay, we come back at 1.40 everybody. And that goes for people online as well. Okay.

Audience

Speech speed

179 words per minute

Speech length

256 words

Speech time

86 secs

Berna Akcali Gur

Speech speed

147 words per minute

Speech length

2264 words

Speech time

921 secs

Jamie Stewart

Speech speed

162 words per minute

Speech length

2664 words

Speech time

984 secs

Kamesh Shekar

Speech speed

180 words per minute

Speech length

1281 words

Speech time

426 secs

Kazim Rizvi

Speech speed

210 words per minute

Speech length

1607 words

Speech time

458 secs

Kimberley Anastasio

Speech speed

171 words per minute

Speech length

2039 words

Speech time

717 secs

Moderator

Speech speed

140 words per minute

Speech length

4886 words

Speech time

2101 secs

Nanette Levinson

Speech speed

141 words per minute

Speech length

1476 words

Speech time

629 secs

Vagisha Srivastava

Speech speed

171 words per minute

Speech length

2617 words

Speech time

916 secs

Yik Chan Chin

Speech speed

157 words per minute

Speech length

2398 words

Speech time

919 secs

Connecting open code with policymakers to development | IGF 2023 WS #500

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Helani Galpaya

Accessing timely and up-to-date data for development objectives presents a significant challenge in developing countries. It can take up to three years to obtain data after a census, leading to outdated and insufficient data. This lag in data availability hampers accurate planning and decision-making as population and migration patterns change over time. Additionally, government-produced datasets are often inaccessible to external actors like civil society and the private sector. This lack of data transparency and inclusivity limits comprehensive and integrated data analysis.

Another issue is the lack of standardisation in metadata across sectors, such as telecom and healthcare, especially in developing countries. This lack of standardisation creates challenges in data handling and cleaning. The absence of interoperability standards in healthcare sectors further complicates data utilisation and analysis.

Cross-border data sharing also faces challenges due to the absence of standards. This absence hinders the secure and efficient exchange of data, hindering international collaboration and partnerships. Developing more standards for cross-border data sharing is crucial for overcoming these challenges.

Working with unstructured data also poses challenges, particularly when it comes to fact-checking. There is a scarcity of credible sources, especially in non-English languages, making it difficult to identify misinformation and disinformation. Access to credible data from government sources and other reliable sources is essential, but often limited.

Efficient policy measures and rules are necessary to govern data usage while preserving privacy. GDPR mandates user consent for sharing personal data, highlighting the importance of differentiating between sharing weather data and personal data based on different levels of privacy violation.

The usage of unstructured data by insurance companies to influence coverage can have negative implications, potentially resulting in unfair risk classification and impacting coverage options. Ensuring fairness and equality in data usage within the insurance industry is crucial.

To address these challenges, building in-house capabilities and utilising open-source communities for government systems is recommended. Sri Lanka’s success in utilising its vibrant open-source community and building in-house capabilities for government architecture exemplifies the benefits of this approach.

The process of data sharing is hindered by the incentives to hoard data, as it is seen as a source of power. The high transaction costs associated with data sharing, due to capacity differences, also pose challenges. However, successful data partnerships that involve a middle broker have proven effective, emphasising the need for sustainable systems and case-by-case incentives for data sharing.

The evolving definition of privacy is an important consideration, as the ability to gather information on individuals has surpassed the need to solely protect their personal data. This calls for a broader understanding of digital rights and privacy protection.

In conclusion, accessing timely and up-to-date data for development objectives is a significant challenge in developing countries. Government-produced datasets are often inaccessible, and there is a lack of standardisation in metadata across sectors. The absence of standards also hampers cross-border data sharing. Working with unstructured data and fact-checking face challenges due to the scarcity of credible sources. Policy measures are necessary to govern data usage while protecting privacy. Building in-house capabilities and utilising open-source communities are recommended for government systems. The government procurement system may need revisions to promote participation from local companies and open-source solutions. Data sharing requires sustainable systems and incentives. The definition of privacy has evolved to encompass broader digital rights and privacy protection.

Audience

During the discussion, the speakers explored various aspects of open source, highlighting its benefits and concerns. One argument suggested incentivising entities to share data as a way to counteract data hoarding for competitive advantage. It was noted that certain organisations hoard data as a strategy to gain a competitive edge, but this practice hampers the accessibility and availability of data for others. Creating incentives for entities to share data, therefore, was emphasised as a vital step in promoting data openness and collaboration.

Conversely, the potential negative effects of open source were also discussed. The speakers raised concerns regarding the need to verify open source code and adhere to procurement laws. They specifically mentioned the French procurement law, expressing apprehensions about the ability to effectively verify open source code and ensure compliance with regulations. These concerns highlight the necessity for thorough scrutiny and robust governance measures when relying on open source solutions.

Building trust in open source was another significant argument put forth. In Nepal, for instance, there was a lack of trust in open source, hindering its widespread adoption across different sectors. The speakers stressed the importance of establishing mechanisms that enable the verification of open source code, ensuring its reliability and security to build trust among stakeholders. They also emphasised the need for capacity building to enhance knowledge and expertise required for verifying and utilising open source code effectively.

Overall, the sentiment surrounding the discussion varied. There was a negative sentiment towards data hoarding as a strategy for competitive advantage due to its restriction of data availability and accessibility. The potential adverse effects of open source, such as the need to verify code and comply with regulations, were also viewed negatively because of the associated challenges. However, there was a neutral sentiment towards building trust in open source and recognising the necessity for capacity building to fully leverage its benefits.

Mike Linksvayer

Mike Linksvayer, the Vice President of developer policy at GitHub, is a strong advocate for the connection between open source technology and policy work. He firmly believes that open source plays a crucial role in making the world a better place by supporting the measurement and information of policy makers about developments in the open source community. Linksvayer expresses enthusiasm about the potential of sharing aggregate data to address privacy concerns. He sees promise in technologies like confidential computing and differential privacy for data privacy and recognises the importance of balancing privacy considerations while still making open source AI models beneficial to society.

Mike Linksvayer emphasises the crucial role of archiving in software preservation and appreciates the contributions of Software Heritage in this field. He highlights the separation of preservation and making data openly available. Linksvayer sees coding as unstructured data and acknowledges the importance of data collection in research on programming trends and cybersecurity. Collaboration in software development is facilitated by platforms like Github, which provide APIs and open all events feed, enabling the sharing of aggregate data. Linksvayer believes that digital public goods, including software, data, and AI models, can be effective tools for development and sovereignty, addressing various Sustainable Development Goals (SDGs).

Promoting and supporting open source initiatives is essential, according to Linksvayer, as they drive job creation and economic growth. He cites a study commissioned by the European Commission estimating that open source contributes between €65 to €95 billion to the EU economy annually. Linksvayer also stresses the importance of cybersecurity in protecting open source code and advocates for coordinated action and investment from stakeholders, including governments.

In summary, Mike Linksvayer’s advocacy for open source technology and its connection to policy work underscores the potential for positive global change. He emphasizes the importance of sharing aggregate data, advancements in data privacy technologies, and the promotion of digital public goods. Linksvayer also highlights the economic benefits of open source and the critical need for investment in cybersecurity.

Cynthia Lo

During the discussion, several key points were highlighted by the speakers. Firstly, Software Heritage was praised for its commendable efforts in software preservation. It was mentioned that the organization is doing an excellent job in this area, but there is consensus that greater investment is needed to further enhance software preservation. This recognition emphasizes the importance of preserving software as an essential component of data preservation.

Another significant point made during the discussion was the support for assembling data into specific aggregated forms based on economies. This approach was positively received, as it provides a large set of data that can be analyzed and utilized more effectively. The availability of aggregated data based on economies allows for better understanding and decision-making in various sectors, such as the public and social sectors. This aligns with SDG 9: Industry, Innovation and Infrastructure, which promotes the development of reliable and sustainable data management practices.

One noteworthy aspect discussed by Cynthia Lo was the need to safeguard user data while ensuring privacy and security. Lo mentioned the Open Terms Archive as a digital public good that records each version of a specific term. This highlights the importance of maintaining data integrity and transparency. The neutral sentiment surrounding this argument suggests a balanced consideration of the potential risks associated with user data and the need to protect user privacy.

Furthermore, the discussion touched upon the role of the private sector in providing secure data while ensuring privacy. Cynthia Lo raised the question of how public and private sectors can collaborate to release wide data sets that guarantee both privacy and data security. This consideration reflects the growing importance of data security in the digital age and the need for collaboration between different stakeholders to address this challenge. SDG 9: Industry, Innovation and Infrastructure is again relevant here, as it aims to promote sustainable development through the improvement of data security practices.

In conclusion, the discussion shed light on various aspects related to data preservation, aggregation of data, user data safeguarding, and the role of the private sector in ensuring data security. The acknowledgement of Software Heritage’s efforts emphasizes the importance of investing in software preservation. The support for assembling data into specific aggregated forms based on economies highlights the potential benefits of such an approach. The focus on safeguarding user data and ensuring privacy demonstrates the need to address this crucial issue. Lastly, the call for collaboration between the public and private sectors to release wide data sets while ensuring data security recognizes the shared responsibility for protecting data in the digital age.

Henri Verdier

In this comprehensive discussion on data, software, and government practices, several significant points are raised. One argument put forth is that valuable data can be found in the private sector, and there is a growing consensus in Europe about the need to promote knowledge and support research. The adoption of the Data Sharing and Access (DSA) policy serves as evidence of this, as it provides a specific mechanism for public research to access private data.

Furthermore, it is argued that certain data should be considered too important to remain private. The example given is understanding the transport industry system, which requires data from various transport modes and is in the interest of everyone. The French government is working on what is called ‘data of general interest’ or ‘Données d’intérêt général’ to address this issue.

The discussion also highlights the importance of data sharing and rejects the idea of waiting for perfect standardization. It is noted that delaying data sharing until perfect standardization and good metadata are achieved would hinder progress. Instead, it is suggested that raw data should be published without waiting for perfection. This approach allows for timely access and utilization of data, with the understanding that standardization and optimization can be addressed subsequently.

The protection of data privacy, consent, and the challenges of anonymizing personal data are emphasized. The European General Data Protection Regulation (GDPR) is mentioned as an example of legal requirements that mandate user consent for personal data handling. It is also noted that anonymization of personal data is not foolproof, and at some point, someone can potentially identify individuals despite anonymization attempts.

Open source software is advocated for government use due to its cost-effectiveness, enhanced security, and contribution to democracy. France has a history of utilizing open source software within the public sector, and there are laws mandating that every software developed or financed by the government must be open source. The benefits of open source software align with the principles of transparency, collaboration, and accessibility.

The discussion also addresses the need for skilled individuals in government roles. It is argued that attracting talented individuals can be achieved through offering a mission and autonomy, rather than relying solely on high salaries. The bureaucratic processes of government organizations are criticized as complex and unappealing to skilled workers, indicating a need for reform to attract and retain talent.

In conclusion, this discussion on data, software, and government practices emphasizes the importance of a collaborative and transparent approach. It highlights the value of data in both the private and public sectors, as well as the need for data sharing, open source software, and data privacy protection. The inclusion of skilled individuals in government roles and the promotion of a substantial mission and autonomy are also seen as essential for effective governance. Ultimately, this comprehensive overview underscores the significance of responsible data and software practices in fostering innovation and safeguarding individual rights.

Session transcript

Cynthia Lo:
us this morning. It’s a little bit early for individuals on site. Today we’re talking on a workshop on connecting open code with policymakers to development. And an agenda that we have here today, we’re going to go through a round of introductions and then overview of connecting open code with policymakers. Then we’ll move directly to our panel discussion and then a Q&A. Feel free to ask questions also to our online participants as well. I’m going to hand it over first to Mike Linksvayer to do an introduction. One second. We have slight technical difficulties. And so one moment while I fix that. I’m going to hand it over

Mike Linksvayer:
to Halani and never mind, we have Mike now. Hey, thanks a lot for resolving those technical difficulties. I’m sorry I can’t be there in person. I’m Mike Linksvayer. I’m the VP of developer policy here at GitHub. Former developer myself who’s now been doing policy work focused on making the world better for developers and helping developers make the world a better place. Kind of open source is a big part of the way that happens. And I’m really excited about measuring it and informing policymakers about what’s going on. And so I’m really excited about this panel. Great. And I’ll pass it over to our speakers here, Halani and Anri.

Helani Galpaya:
Is there a specific question? Just to introduce yourself. Okay. I’m Halani Galpaya. I’m the CEO of LearnAsia. It’s a think tank that works across the Asia-Pacific on broadly infrastructure, regulation and policy challenges, but with a huge focus on digital policy. Thank you.

Henri Verdier:
Hello. Good morning. I’m Henri Verdier, French ambassador for digital affairs. Just to mention that I’m not a career diplomat, I was a French entrepreneur a long time ago and I used to be the state CIO for France.

Cynthia Lo:
Great. Thank you. So we’re going to move directly to our panel talk. And to start, let’s talk a little bit more about challenges from unmet data needs. So let’s start with Helani here. What are some of the challenges that you’ve seen over the years on unmet data needs?

Helani Galpaya:
I mean, from a development perspective, understanding where we are in whatever those development objectives, that’s the starting point of any kind of development. And that’s a problem if there is no data. And particularly when it comes to developing countries, which is where I come from, this is a particular challenge, right? So traditionally we’ve relied on government-produced data sets, take for example the census. Every 10 years it’s supposed to happen. And low levels of digitization has traditionally meant it takes about three years after the census to actually get some data out in many countries, by which time the population has changed, the migration patterns have changed, and so on. But we know now there are obviously lots of other proxy data sets that we can use. But the timeliness is one concept. that we worry about in development because the data is slow to come by, even when it is available. The second unmet need is if you’re outside of government, is the availability of data to actors outside government. And frankly, within government, sometimes the data that’s collected by one department or ministry is not even available to others, right? So there’s a very low level of data access possible within government, and certainly for civil society and private sector outside government to access data. Many governments have signed on to open data, charters and all of those things, but really the data that they put out is sometimes not what most people need. It’s not usually in machine readable format, so you spend enormous amounts of time digitizing it and data-fying it. So these are sort of basic challenges and basically, I mean, from the government point of view, governance and regulation in particular, the oxygen that feeds that engine is data because there’s a huge data asymmetry between the government and the regulators versus the governed entity. Take telecom operators, for example, right? How are they doing? They have a lot more information about their operations than the regulators or the governing party would. So there’s really multiple data challenges that we have, and increasingly, the conversation is that the private sector data can act as a proxy to inform development, but negotiating that and accessing that is particularly hard. So there’s multiple data challenges in developing countries, particularly from our point of view as a research organization sitting outside government and outside private sector.

Henri Verdier:
Thank you for the question. 15 years ago or something like this, government understood. that open government data was very important. And together, we did work a lot to open our data. And then maybe we’d commit later our source code. And we learned some lessons, that those data could create much more value if more people can use it, that it was a matter of transparency, democracy, but also economic development, efficiency, and maybe citizenship. And more and more, we understood that government don’t have the monopoly of general interest. And some very important data are in the private sector. So it’s time, probably, and it’s the moment to start thinking deeply, even philosophically, about the private sector data. In Europe, first, there is a growing consensus that, first, we need to help research and to promote knowledge. There are a lot of topics where we have to know. I can speak about disinformation, some impact of social networks, but also climate change, or some important topics. We need more knowledge. And for example, if you look at the DSA that we did adopt last year, we do organize a specific access to private data for public research. Of course, I know that there are important issues, privacy, intellectual property, sometimes security, because if you share everything, you can allow reverse engineering and hacking, et cetera. But we can fix it. And for example, there is an important field of research regarding confidential computing. You can use the data without taking the data. So this is a growing consensus. And probably, we will have, collectively, this kind of consensus. the international community to make the public research stronger and to organize ourselves to be able to understand important mechanisms. But then there are also other actors that need access to those data. And for France, for example, first we do encourage the private sector to be more responsible. Let’s think, for example, about the transport industry. If you don’t have all the data, you have nothing. If you don’t have buses and taxis and personal cars and motorcycles and metro and train, you don’t understand the system and you cannot take good decisions. And this is in the interest of everyone, the public decision-maker, private actors. Everyone needs a good comprehension of a good knowledge of the system itself. So we do encourage cooperation, sharing the data, et cetera. Then we think that we can go further. Maybe you know the French economy Nobel Prize, Jean Tirole, he did publish a lot about the economy as a common good. And we consider that it’s time to conceive some incentive to make the private sector share some important data. And this year, in the French government, we are starting to work deeply on what we call data of general interest, the Données d’intérêt général. Because as I said, government don’t have the monopoly of general interest. And some data should be considered as too important to be allowed to remain private. Of course, this is complex because we need a legal framework to give a status to this kind of data. But really, we did open the case to create a status for some very important, impactful data and to decide that… And those data has to be open, even if they come from the private sector.

Cynthia Lo:
Perfect. I do also wonder, you mentioned one thing about policy and standards. A lot of metadata has very clear standards in financial markets, in healthcare, for instance. I’m curious to know, are there any unmet needs within the standards of metadata? It’s currently governed quite well, but is there anything, is there a certain standard that isn’t out there that could be?

Helani Galpaya:
I mean, the data we deal with, my data scientists deal with, telecom, mobile telecom network, big data, basically call detail records, for example, that we get from base stations, trillions of call detail records. These are not standardized under any means, right? In fact, the team spent four to six months cleaning up the data, because across, when you get data from four to six telecom operators, they’re actually not standardized, how the numbers are. So they’re not interoperability standards. There are many, many sectors where there isn’t interoperability standards. And of course, some of the coolest stuff that comes out is from unstructured data anyway, like social media data and so on. I think the financial sector has traditionally been well forward in this, but many other sectors haven’t. Health, I think, in developing countries is less developed, interoperability standards. And certainly for cross-border data sharing, this is a fundamental problem, right? Like when you look at taxation data, all of that, there’s a lot more work that needs to be done, I think, particularly when it comes to developing economies.

Henri Verdier:
Yes. I did join the French government 10 years ago to lead the open data policy. The lesson learned is that… If you wait for a perfect standardization and a good metadata, you will never do anything. When I did join the French government, we wanted to index every dataset through an index that was conceived during the Middle Age for the National Archives with 10,000 words. So it was quite impossible to publish a dataset because you had to come back to Philippe Lebel to decide where you did. So I take from the open data movement the idea that you share your raw data as they are and don’t wait. But it doesn’t mean that standards don’t matter. Of course they do. But let’s start by publishing. The second lesson is that maybe the apification process is more important than the indexation on metadata themselves. So first, during maybe five years, you did publish everything. But it was not always very useful, especially for data that has to be refreshed very frequently. And you need the last data and not just a data. So for this, we did then take three years to organize a proper API ecosystem. And again, people told me, then first you have to conceive a good architecture of the API system. I said, no, let’s build API. And then we will optimize the API system. So my lesson is that, and my personal experience, don’t wait for a perfect standardization because you will never accede to this goal. This is a moving target. So don’t wait.

Cynthia Lo:
Thank you. And I think that brings us to our next point quite well. You both highlighted this as well on private sector data for development purposes. And I know Mike also has some thoughts on that. But I’d love to know. You mentioned on private sector data, a lot of times it’s a little unstructured, but that’s interesting because you have, it’s wider. You can take a look and analyze that in an easier way. Tell us a little bit more on that, what has been a surprising find? On private sector data? Yes, and some unstructured data.

Helani Galpaya:
Some unstructured data that we work with include, for let’s say for misinformation and disinformation identification, automatic identification of mis and disinformation that spread across platforms in languages outside of English in particular. And there, I think, well, there’s sort of two types of problems. One is just the low levels of data. So I mean, even assuming you have all the language resources, like a language corpus that is needed to identify this on, you know, natural language processing, you at some point you’re going to need a fact base to check against, right? So there, the unstructured data is, well, structured or unstructured data comes from government resources and maybe other sort of credible sources, right? So you’re dealing with two types of data. To fact check numbers, you’re looking at usually trying to find government, and to fact check other things, you’re looking at reports and so on. And there’s a serious lack of data. So for example, if you look at like the big popular English language models, they are trained on millions of articles. We tried this in Bangladesh and Sri Lanka to fact check. We’re down to 3,000 articles that are credible, you know, sort of data sources that we can use to fact check against unstructured, you know. So we are working with a very, very limited universe of credible data that’s actually out there because there’s very little out there so I think that’s for us the biggest challenge.

Henri Verdier:
Sorry, it’s a very complex question. First I was thinking thatcompletely unstructured data are very rare because usually someone did produce the data and did pay something. So a data set is the answer to one certain question but usually it’s not your question. So they have a structure usually. Of course in the world of Internet of Things and some source and you have more and more quite not structured data but if you do observe we are living in a world of data with purpose so they have a structure. So the question again is to think about interoperability and to build bridges. One other question with unstructured data or with a minimum structure is that if you want to share the data to give them as many as more value as they can have you also have to protect other important securities like again privacy but not just privacy like interoperability and if you don’t really understand what is within the data you are not sure that you are protecting all the securities you have to protect. That’s why I pay more and more attention to the field of research as I said of confidential computing. We have to learn to work with the data to train AI model to ask question. For example let’s in France as you know we have an ancient and very structured social security system. So there is one database the social security with every prescription that every French doctor made during the last 20 years. Can you imagine this? 70 million people, every prescriptions made by a doctor during 20 years. And then we make a statistic archive. So you take 1% and we. Here, of course, you have a lot of knowledge and science. You can discover new drugs because you can discover that, I don’t know, someone that had a lot of head ash at the age of 20 don’t have Alzheimer’s 40 years later. And you can discover a new principle of some drug and a lot of things like this. But you cannot just open this kind of data because this is pure privacy. This is my health and your health. But you can organize a technical strategy to accede to this data without showing them. And if you do this, you can control a bit the people that are using this data. And if they don’t respect some laws or principles, you can disconnect them. So this is probably an important field. So again, I’m not looking for a perfect standardization. But we can organize the ecosystem of how to access to the data, when, why, and give another relation between knowledge and data.

Helani Galpaya:
And I agree with the minister. Some of the solutions are technical. We’ve certainly worked with differential privacy methods when we use call data records to still have the data be as usable to inform policy, but without revealing where an individual might actually be or what that person’s number is and all of that. The other part of the solution, I think, is policy, is to have some kind of governing structure to make sure that we are able to use it and preserving the privacy and having some sort of rules around what that. users, what the data is used for, like in the health care system, that insurance companies cannot use it and then drop, you know, private insurance companies cannot drop coverage, because they have so much more information about a set of users. Even if they’re not individually identifiable, once you’re in an insurance pool, you can identify that this is a much higher risk. So there’s sort of, you know, policy as well as technical solutions there.

Cynthia Lo:
I think on the privacy part, I’m very curious to also hear from Mike on what are your thoughts on privacy and private sector data. I’d love to know your thoughts on that, too, or anything to add.

Mike Linksvayer:
Okay, yeah. Well, I first would say that I should have said in my introduction and shouldn’t assume people know what GitHub is, where I work. It’s the largest platform where software developers from around the world come to develop software collaboratively, a lot of it open source. And there are a lot of themes I’m hearing that you can, I mean, software development is kind of a very specific thing, but I think there are a lot of themes we’ve talked about on structured data, APIs, and privacy that maybe I can paint a little bit of a picture about how it works with data about code developments and that the code that programmers are writing is data itself. And indeed, you could think of it as unstructured data. It’s a text file, but it also, each programming language has its own structure because it needs to be able to parse the individual statements. So it’s really a matter of how much work do you want to do and what are the questions that you have about, for example, software developments. And then APIs is another aspect you can, if you want to crawl all of the code, we call our repositories is where a particular project on GitHub or similar platforms are collaborated on. If you want to crawl all the code in the world, that will take you a long time and be very resource intensive. However, GitHub and similar platforms also make APIs available. So I think that’s another kind of common theme that we can look at how exactly that looks with code where you can both kind of do queries to ask questions about kinds of projects that you’re interested in, or you can kind of try to ingest all of the activity as it comes out because GitHub has a very open kind of everything, all events feed, but that also is extremely expensive to do. So as a, and some researchers who do kind of research around programming trends, I don’t know, cybersecurity, there’s a bunch of different kind of research areas that you can look at GitHub data to do. A lot of them spend a lot of their time kind of gathering data before they can even answer or kind of validate whether they’re asking the right questions. So one approach to that and dealing with privacy is publishing aggregate data that will be helpful for some use cases. And that’s what we’ve done with a new kind of initiative we have at GitHub. We’re calling the innovation graph, which is basically a longitudinal data per country, per economy. roughly country basis at various kinds of activity. And so that, and we did it particularly to inform policy makers and international development practitioners who want to understand, use that data to understand things like digital readiness within their sphere of influence. And to, we were able to, publishing aggregate data, kind of satisfy some of these use cases or at least allow us people to explore the aggregate data to figure out what they want to make an investment in, you know, crawling more. It also sort of neatly deals with the fundamental privacy questions that you don’t want to identify, you know, individuals and things like that. So you can do that by kind of, you know, thresholding a certain number of people have to be doing an activity within a country in order to report aggregate statistics on that. So that covers a lot of different themes I think we’ve heard covered there. And I think there’s a ton of promise in, you know, a range of technologies like confidential computing, differential privacy. And I’m excited about them all because developers are building them and a lot of the research slash the R&D is open source. But simple, I guess I’ll just highlight here that, you know, very simple approach, but kind of very low tech approach of, as a first, you know, step at sharing data can be just sharing aggregate data that doesn’t have any privacy concerns can be, you know, that’s, it’s actually very much kind of to Henri’s point about like sharing data before you do all of the standards work because that will. you might be waiting forever. Also sharing aggregate data is a way to kind of take that first step, share data that’s gonna be useful to a range of stakeholders, and then work on the harder part that might be pending more advanced technology to deal with the harder issues.

Henri Verdier:
Oh yeah, please. A small answer. First, we know and cherish GitHub. When I was a state CIO, France was the second public contributor, government contributor in GitHub. And I don’t know if you know, but by French law, every software that the government develops has to be open source and free software. So open source and freely reusable. And more than this, every time that the government use an algorithm to take a decision, he has to publish the source code, but also to tell to the citizens that we are using algorithms, and to be able to explain in a simple word how it works. So that’s an important and coherent policy. And regarding the structure or unstructured data, what I learned from my open data experience, as I said, the first duty is to share data as they are. And then some people will structure. And if we think again about GitHub, so as I said, we cherish GitHub, but we work a lot within GitHub. And then, for example, in France, I don’t know if you know the Software Heritage Project. So here, some researchers from the INRIA decided to build the biggest possible archive of every software. So taking GitHub, but also some dead forges like the Google one. And they are working harder to structure it now. to be able to track the genesis of a soft. So they are working. But we did allow this because we did publish unstructured software. And then some people can continue. And maybe someone will do better. I don’t know. But we will have a variety of experiences. So my lesson is to separate, first publish, and then structure. And you can have a diversity of attempts to structure if you have a common ground of raw data or software.

Cynthia Lo:
I think you mentioned a really interesting. Sorry, please, Mike.

Mike Linksvayer:
Yeah, I just wanted to add to that. Thanks for cherishing GitHub. I definitely cherish Software Heritage. And really, archiving is almost a third part that is also extremely important and, I think, under-invested in. So I think in the software preservation space, Software Heritage is doing an amazing job. But I think that’s preservation of data is something that can be decoupled from the making available unstructured. But I think it’s extremely important to think about.

Cynthia Lo:
Yeah, absolutely. I think we actually have a slide here as well on the innovation graph that Mike had mentioned. And I also saw in the audience here, we have Malakumar, who helped on the standardized metric research because we wanted to understand exactly what type of data would help and what type of data would public sector or the social sector require. And as you mentioned, we have the API, which is that large set of data that Anri mentioned first. And then now we’ve gathered all of the data sets into specific aggregated data based on economies, just in the pattern that Anri had mentioned. I’m not sure, Mike, if you want to mention anything on there. I think you also may be able to share your screen if you’d like. But also, huge thank you to Malakumar, who led that standardized research. metrics research, who’s joining us online.

Mike Linksvayer:
Yeah, I can share my screen briefly, if it would be useful. I’m not sure what the, if folks will be able to see in the room. So maybe I’ll share and you can tell me whether you can actually see it in a useful way. Okay. Can you see, see anything on the screen? Yes. Yes. Okay, great. I think I’m sharing a window that has the page for France and the innovation graph. So this is just to show that we have a bunch of data on a per economy, on a per economy basis. Some of them are fairly technical. Git pushes is basically code uploads to GitHub and you can see that summer vacation actually happens. And repositories, as I was saying, this is the kind of unit of a project on GitHub and similar platforms have using the same concept. Developers, those are people actually writing the code or in some cases doing design around software project. Organizations, which is kind of a larger unit of organizing projects on GitHub that sometimes correspond to a real world organization, sometimes do not. Programming languages, this can be very useful for thinking about skilling within a country. And licenses are about copyright. And then probably, oh, and then topics are, this is currently very unstructured. Basically maintainers on GitHub can assign keywords to their projects, but this can also be, so it’s kind of very noisy data, but can be. helpful in, you know, really diving into, like, identifying a set of projects that you want to study more. And one thing that I’m excited about, so you can tag with any kind of text. So even going forward, people might tag that, you know, your project is relevant to a particular sustainable development goal. And so you’ll be able to kind of navigate the tags in that way or the topics in that way. And finally, perhaps most interesting and new is this kind of trade flow diagram. You can see economies that France is collaborating with that developers are kind of sending code back and forth. So you see U.S., Germany, Great Britain, Switzerland. It’s unsurprising that those are some of the top ones. You can also combine all the member states. And this is kind of a first release. There’s obviously a lot of other exciting analysis that can be done. The data is actually open in the repository. You can see the data here. And, you know, at the end of the day, data can be extremely boring. This is literally a CSV file. But that boringness is fantastic because it means that, you know, you can use your tool of choice, whether it’s a spreadsheet, a Jupyter notebook, or something fancier to analyze the data. And then I’ll just show really quick the reports that Cynthia and Mala mentioned, worked on. And that kind of really drove our requirements for this project, looking at what kinds of data about software development would actually be useful for international development, public policy and economics practitioners. So did a lot of, you know, discussions with entities that are part of the data development partnership, for example, to help design this. And then I also pulled up Software Heritage because I’m a big, big fan of it. They have a page on here that I can’t find immediately kind of showing all the different. projects that they indexed. But I cherish that, too. So anyway, I’ll stop sharing. If people later have questions about a particular country or metric, happy to share again.

Henri Verdier:
Yes, thank you. Very, very promising. We did agree, apparently, that the best policy is to first publish and think later. But we also have to think and to understand. I observe that we are more and more living in a world of interdependent, free, and open source software. And there are dependencies and security issues. If we don’t understand a bit the very structure of the soft ecosystem we are living in, we have to face important concerns. We can remember log4g, for example. We can observe that sometimes when we discover a security failure, because we don’t know the story of the evolution of the code, the forks, et cetera, we are not able to correct everything because we don’t have a proper vision of the history and the evolution of the code. And probably that’s a very important new frontier. We have to build new tools and new approaches to understand to control this very complex system of softs. Do you agree?

Helani Galpaya:
Yeah, I completely agree. I think Sri Lanka, just one example, has a really vibrant open source community. So this kind of data, if they are using GitHub primarily, could be really interesting to understand the evolution of that community as one thing. But just on the many countries are technology takers and product takers when it comes to e-government systems, so don’t have the luxury of saying everything will be open. They’re buying software from big companies. which will not certainly make the code open, right? Not even APIs, a very close, tight licensed system is what they’re buying. And I think as countries go along that technology maturity road, like Sri Lanka at some point came to the point where there was enough capacity with the CTO, with the government agency who was able to say, okay, we will build some of this in-house, I will use the open source community who’s working around the world to build some of these tools to set up the basic government architecture. But that takes a bit of time, I think, to get to this stage because the easiest thing is to get some donor money and to do a procurement of a closed system. And that’s really problematic, yeah.

Henri Verdier:
Small comment. When I was in charge, the budget for buying software in France was four billion euros a year. Half of it was consumer products, like, I don’t know, Windows. So for this, of course, we cannot negotiate. But half of this, two billion, were proper, yes, back-end system. And here, you can decide by law that in the procurement, the software has to be open. And we are trying to, we tried to do this, and now that’s quite a standard for French procurements.

Cynthia Lo:
I have many thoughts on that, because I’m very curious, we’ve been talking a lot during IGF about digital public goods and how that could be discovered a little bit more. But that is maybe a little bit off course, but maybe think a little bit about that, I think.

Mike Linksvayer:
Well, actually, don’t, if I could interrupt. It’s actually not off course in a way, at least maybe I can tie it in. I think the, and maybe I’ll share my screen again really, really quick. I mean, this might have been something we’re planning to talk about later, but I think it’s a good opportunity to actually. So this that I’m sharing now is the digital public goods registry, which. in parts which digital public goods could be software, could be data, could be AI models, could be a lot of different things, but it’s mostly software. In fact, you can see the breakdown here between software data and content. And you can see that they’re all tagged in relation to a particular SDG. Part of the, I mean, a big part of the motivation here is we’re gonna find and share solutions, you know, to progress on the various SDGs. The same kind of concept can be useful to kind of just basically curation of information about open projects is its own data project in a way, and can be very helpful in not reinventing the wheel, finding that, you know, a government or civil society institution is already, you know, serving a particular need in that software was developed in country A and people in country B can maybe take it and use it or customize it. And so they have a little bit more, I guess, sovereignty or autonomy to use those words that are quite popular now. See, and the way it’s really tied together, I think, is that the, yes, these are tools that can be helpful for development, for SDG attainment, for sovereignty, but it’s also a data projects kind of doing this kind of organization and, you know, which is its own effort. And I’ll stop sharing now.

Cynthia Lo:
No, thank you, Mike. I did also wanna highlight the Open Terms Archive, which I believe is a digital public good incubated with the government of France. Linking back to you mentioned on security. having ways to publicly record every version of a specific term, and I think it does tie in very well with security, and I was a little curious to go to the next slide about our topic on data, privacy and consent, and then also, widely, security. Would love to know some of your thoughts on how to really safeguard all the data that impacts the users. How should public or private sector provide data that is secure and ensures privacy? It is a big question, and there’s no perfect answer, of course, but another way to think about it is if there’s one suggestion for private sector data that are thinking of releasing data sets, if they release a wide set, is there anything they should keep in mind before doing so?

Henri Verdier:
Yes, that’s a very complex question, and there is no silver bullet. In Europe, we started with principles, so the GDPR, which started in France in 1978, decided that regarding personal data, data speaking about you, the consent of the user is needed, so it’s mandatory. So then we had to… You can conceive legal approaches or technological approaches, and for example, I’m very interested in an Indian project, Digital Empowerment and Privacy Architecture, that does organize technically a way to check the consent in a way that tries to be an infrastructure to unleash innovation. This is not a burden, this is an infrastructure for innovation. So you can implement it. on various approaches, and some are better than others, but there is a strong principle there. And for example, just to mention it, there is also a legal controversy between France and Anglo-Saxon countries because we consider personal data as something like your body. You are not the owner of your body. You cannot decide anything regarding your body, and you cannot decide anything regarding your personal data. There are some fundamental rights. In the world of the copyright, this is a different approach, and that’s great. We can extend. But in France, we have strong commitment that you cannot treat personal data as an average data.

Helani Galpaya:
I think that this approach many countries are taking, seeing a difference between sharing weather data, for example, and very different from personal data. I think we talked about it earlier as well. I think what the minister is talking about is sort of the policy legal, and then we talked about some of the technical solutions. And I think at a practical level, there’s private data, but there’s also commercially sensitive data. So our approach, for example, was to say we will not work with one telecom operator’s data because that’s highly commercially sensitive where the base stations are, which direction it’s facing, the power on those base stations, et cetera. We said we’ll go into this sort of kind of data and analytics to understand where people live, where people move. All of that is possible with mobile network data, but we will only do it if we have more than one company contributing data, and then we sort of anonymize at a company level. Like the base stations are not known whether it’s company X or Y. So the more data that you pool, that brings another level of protection on commercially sensitive data in our case, yeah.

Henri Verdier:
Yes, of course. Statistic anonymization can be. useful for some purposes. If you want to make epidemiology for example, if you want to understand where people, population goes in case of natural disaster, if you want even to check if France or Germany did respect more the lockdown during the Covid, do you know that we did respect the lockdown more than Germans? Yes, we learned this through operators data because of course everyone including me would have bet that German would have been more strong. So you can have a very important use of statistic data, but except this approach I think that you can never really anonymize a personal data, the data describing one person. You can delay the name, the age, at some point someone will find you. So if you want to build knowledge regarding one person, someone, here you need other approaches like confidential computing, technological solutions.

Helani Galpaya:
I agree and I think sort of it depends on the situation and what the company is releasing data for, right? I think what we’re saying is at aggregate level there’s a lot of use you can make out of it. You don’t need anything that’s even remotely identifiable, you can talk about groups of people. But I mean Covid was a classic example to understand movement that was good enough. Facebook check-in data was being used in some governments to see where people are, but at some point if you’re looking at an outbreak and then you’re trying to contact trace using data, then that’s a very different level of privacy violation and you need the legal backing to say okay this is a national emergency and I’m now going to actually identify who owns that cell phone because we need to know where that person may have spread, you know, moved and then spread the virus. So it depends on the question you’re asking really, what company data can do and what the safeguards should be.

Cynthia Lo:
Thank you. I also want to make sure, give an opportunity to Mike, if you have any thoughts on safeguards and privacy and consent on private sector data being released.

Mike Linksvayer:
I think really all of the key points have been covered already. So I don’t think I really have anything substantive to, I mean, directly on point to that, but I feel it’s related to another thing that’s happening now that’s kind of related to open data and open code, which is a debate around how open, quote, open source AI has to be. And a lot of, and the reason why there’s a link is because a lot of times data can’t be fully opened for privacy and other reasons. And yet society can still benefit from having some of the outputs of that training often called the model. And so there’s kind of a debate about what kinds of sharing of data that’s being used to train and open AI model makes it open or not. To some extent, this is a very academic debate, but at the same time, it could end up being, you know, reflected in law as, you know, because it’s often recognized that that open source might need special treatment because of its non-proprietary nature, but, you know, it can be, there are kind of a bunch of different ways that you can, for a data corpus that’s used to train an AI model, the raw data is extremely useful, obviously, but there are other things that can be useful at all that can be useful as well. For example, a, you know, a description of the schema of all the data that you’re using so that other people can bring their own data. and replicate the model, if like two parties have access to similar private data sets, then they can be close substitutes for each other. So I think that’s like a burgeoning area that all these issues kind of come back together around.

Henri Verdier:
So Mike, this is not just an academic issue, it’s a question of which data you did use to train the model. First, you are in California, I feel. I have read that one of the important reasons of the screenwriters’ strike was generative AI, because they wanted to be sure that the work will be respected, so it can have a very concrete and important impact. And if we don’t pay attention to this, first we will delete all the international architecture of intellectual property, then we’ll create new disbalance and inequalities, because some big companies will take the profit of every creation of all humankind, because they will take everything, everything we did, dream, write, learn, publish, share, and they will use it to train some big monopolistic models. So from my perspective, this is not just an academic controversy, this is one of the most important topics of those days, and we have to be sure, and we can also think about security issues, security concerns. So the traceability, if I may, of how was this model educated is a very, very important issue, and we don’t have proper answers today, because you can…

Mike Linksvayer:
I agree with you, just to clarify the academic comment, it was exactly what can you call open or not, it’s this thing that’s somewhat academic, but the fundamental issues are of extreme importance, and I… and I really appreciate the, you know, French government’s direction around open source AI. It is extremely important.

Helani Galpaya:
No, I mean, just to say there’s like a million conversations about training data and the problems of using certain data for training. I don’t think this is the forum for it. Women, people of color, developing country people are at the receiving end of decisions made by models made that were trained on data that does not talk about them. So, you know, that’s a whole other field. So I don’t think we need to talk about it. Just to say that completely agree the issues around training data are very real and huge. Another important concern is the definition of privacy itself. Because 10 years ago, to protect my privacy, I just had to protect my personal data and I was protected. Today, I can know a lot about you without knowing anything about you. Because I will educate a model and it will predict something about you. So I cannot protect myself just while protecting my personal data. And not living in the digital world is no longer a safeguard against not being profiled. You can profile me even if I have no email address, no presence online.

Cynthia Lo:
On privacy, I think being able to layer in different data sets and as a result, you have a profile of a person. I think it is fascinating to see different data sets. I think as I’m looking at the time now, I want to move on to our last point on promoting and supporting open code initiatives. Considering all of the topics we talked about with security, safeguards, privacy. What is the best way to really promote open code initiatives and how can member states do so?

Henri Verdier:
So first, there are more and more approaches, and that’s great. So you have a strong European policy, for example. You have a network of open source officers in European governments. You have, I did mention the French law, it was named La Loi pour une République Numérique, so La Loi pour une République Numérique, that imposed to the government to publish everything in open source and fully reusable. We are promoting this is a European foundation for digital commons because we want Europe to take its responsibility and to contribute to finance commons that are important for for freedom and sovereignty and self-determination. So there are a lot of initiatives, but the more I work on this field, the more I observe that financing is not enough, and maybe it’s not the most important part. Really using free software, open source, contributing, allowing your public servant to contribute, paying attention. For example, when we did prepare the DSA, we did quite kill Wikipedia, because we said companies with more than, I don’t remember, 400,000 connection amounts in more than seven European countries has to be a legal representative in every European state. For a big tech company, that’s not very expensive, but for Wikipedia, that’s very expensive. So we need a conviviality, we need a proximity, we need a constant interaction, we need a mutual understanding, and this is maybe the most difficult today.

Helani Galpaya:
I want to add just two things to this, which is, I think, one is capacity. Public sector has very low technical capacity in many of the majority world countries in the developing world, and the expectation of, except for a handful of public sector officials, anyone else being able to contribute code, I mean, it’s a dream for many countries, so maybe what we need are, and that’s great if you can do that, and that’s kind of the aspirational stage you want to be. So instead of that, another solution is to build the communities, because the private sector is a lot more evolved and highly skilled, right? So it’s like, I keep going back to Sri Lanka, you know, the really vibrant open source community, highest number of contributions to Apache, for example, right? So that comes from Sri Lanka, that comes from, you know, being in high-paid export-oriented software companies, but a couple of people really getting this community together to create this. So how can they participate in government-related stuff? I think that needs two things. One is that community building, but they can’t participate in government procurement. That’s really hard. Government procurement is a system that puts out a bid and gives points to a company that has done this ten times before in five reference countries, right? A group of people who come together who don’t have that references, it’s very hard to signal that they can do this. So I think there’s some problem there. Then at a practical level, I think if you want to maybe, you know, not go all out, but at least give some preference for open source, some governments, what they do is, you know, out of a hundred, allocate five to ten extra points, which you get as a bonus if you are proposing an open system. And there’s variations on open, completely open, you know, free and open, you know, open, you know, open AIs, et cetera. So, you know, a gradated set of systems marks in the procurement. So different types of companies can at least have a hope of participating and competing against the large firms. This is kind of the same strategy that governments in the South have used to promote local companies when it comes to government procurement of IT systems. It’s very hard to compete with, you know, I mean, I, for example, purchased on when I was in government pension systems, right? A big company will come and say, I’ve done pension systems in five of these companies, five countries. It’s very hard for a local company. So then we say, well, if you at least have a local partner in the first year for technical support, in the second year for actual deployment, you get five marks. So the same way, you can build up this sort of legacy of open source by allocating marks over time in procurement systems.

Henri Verdier:
It totally takes a point. And that’s interesting, because if you do observe the story of governments, they had technical skills to build bridges, roads, railways. And there is something different in the history of IT. Maybe because the story started in the military era, as you know, with projects to launch rockets from a submarine. And it was, from the beginning, very big procurement, very expensive, with very bizarre rules of conducting projects. And governments should learn to work with ecosystems, as you say, to be maybe a bit more humble, to learn about agile methodology, to agree to start with an imperfect project and to improve it, to have a constant improvement policy. So this is a cultural change. And just to finish, maybe it will be time to conclude. That’s why, from my perspective, there is a strong connection between. open source movements, open government movements, because you need to learn humility, to be an actor within a network of actors, and a state modernization, and maybe the new democracy that we need with collective intelligence, citizen engagement, participation, contribution, et cetera. You cannot work just on one of the three topics. You need to cross the three topics.

Cynthia Lo:
Perfect, thank you. And I think looking at the time, we are almost at time, but before we go to Q&A, I want to make sure, Mike, if you have any thoughts as well on this topic of promoting and supporting open code initiatives.

Mike Linksvayer:
Sure, first, I mean, everything already said has been great, and I have too many thoughts, but I’ll just say one thing. I think what doesn’t get measured doesn’t get paid attention to. It’s fantastic that we have free and open source software advocates within government now, but a much broader set of policymakers need to appreciate the role that open source plays in the economy and development, et cetera, and that’s kind of one of the motivations of the innovation graph that we launched, that we want to, if you want to see numbers that are kind of tuned to your jurisdiction, then you can look at those even if you don’t have a fundamental appreciation of open source and understand that it’s a really big driver of jobs, economic growth. People have used GitHub data to show that more, including policies that support, that foster open source leads to more startup formation, more jobs, and things like this. There’s a really important study from the, or commissioned by the European. Commission, I guess, several years ago kind of putting a floor on the contribution of open source to the EU economy of, I believe, the range was like 65 to 95 billion euro a year. So quite significant and would love to see that replicated in, you know, in other jurisdictions in a way that’s very legible to policymakers who don’t know anything about, don’t have any affinity for open source, don’t know anything about technology necessarily. So I think those making it legible is super important.

Cynthia Lo:
Thank you, Mike. And I think before we move to our Q&A, particularly open source in the social sector, there’s a lot of organizations that work in the social sector that are also open source. We mentioned Digital Pop Goods and there’s also research in India, Kenya and Mexico taking a look at what were the drivers for social sector open source organizations. How are they funded? What are their initiatives as well that I think in another section we can explore more on open source in the social sector? OK, and I believe also Malakumar was instrumental in leading that research. As we move to our Q&A section here, opening the floor up to anybody who has any questions here in person, please.

Audience:
Hi, good morning, everyone. My name is Sumanna Shrestha, I’m a parliamentarian from Nepal, and I was very curious to attend this because I have a lot of questions. So the first one is how do you incentivize these entities to actually share data? When you think about different sectors that exist to improve the society, you’ve got private sector, obviously. you’ve got government, and you’ve got a very influential INGOs and UN that work. So how, what are some of the ideas, what has worked maybe in Sri Lanka or other parts of the world to incentivize these different actors to actually share data in whatever format, right? Whatever privacy setting format. The reason I ask that is one of the things in my previous life before parliamentarian, what I’ve seen is there is a massive incentive to hoard the data and then come up with insights to then present and say, okay, I have some advantage over everybody else that then sort of warrants funding for me to go out and do something. It could be going and distributing relief material when there are earthquake or disasters, for example. Right, that’s one. And then it’ll be really great to understand a bit more on this French procurement law that you mentioned that you require a certain percentage to be open source. How did you, in Nepal we have a very big distrust, mistrust towards anything that’s open. They think anything that’s free is not good quality, et cetera. So we tend to procure, and you’re smiling maybe because we see the same problem. That’s exactly the contrary. If you have a closed system, you don’t know if there are back doors. Right, so I think maybe also, I understand, but how did you go about building that level of trust in open sources if you’ve seen, if there was something fundamental you did? I think it also maybe pertains to the capacity, right? How many people can actually have the capacity in Nepal to go check the open source code and then see if there are back doors? So what are some of the in-built assumptions that you have? and what are the maybe very focused attention that you paid to strengthen those pillars to then bring this level of trust in open source? I think let’s start with that.

Helani Galpaya:
Okay, I mean I’ll go on the data part I think. Sort of the superficial answer is it’s actually very difficult to get the incentives right for data sharing, right? Data is power and therefore the incentives are to hoard it whether you use it or not actually, that’s the interesting part. So we’ve spent the past year looking at public private data partnerships across Africa, Asia, Latin America, Middle East and the Caribbean and mapped like over 900 different partnerships around data and done some in-depth case studies and we see a couple of things. One is that data sharing is a really high transaction cost activity, right? Because capacities are different, particularly if you’re dealing with a large company and trying to get some data, you don’t even know who to reach because there are regional managers, marketing managers, somebody in San Francisco, et cetera, et cetera, right? So it’s high transaction cost and what that does is it privileges the really large companies because they can come negotiate with the government, spend the money and they can also enter a market and subsidize something with data with a very long-term view. So I mean Microsoft, for example, is a case in point where they can go and do something in a country that’s in the early stages of development, digitization because in 10 years when everyone gets a computer, you know that operating system is more likely to be a Microsoft one. those kinds of investments for the long-term in data partnerships, right? Many small ones don’t. So partnership building, this is why I said the easy answer is it’s difficult, because partnership building around data are really difficult. So the incentives have to be set up. So we talk often about this incentive of, you know, you can get data from Uber, if it is in Nepal, but I’ll talk about Sri Lanka, that has some percentage market share. Uber can give it to government or civil society to understand, you know, where people are or something. But actually, if you now combine with two other local taxi companies and share the data back with Uber and everybody in a non-commercially sensitive way, it’s now suddenly much more useful to Uber, it’s useful to the local person, it’s useful to the transport planning person in government as well. So you kind of find the incentive system that makes it worthwhile for the large and the small operators to come and play. And then you set up the technical infrastructure for data sharing, of course, right? And you give them the kind of confidence that says we are not going to share sensitive data, you know, like in the telecom example I gave. You also then put the legislation around it for telecom data in particular. We really had to sort of make sure that the telecom regulators didn’t have a problem. So you need sort of research exceptions or public policy or journalistic exceptions in data sharing, particularly if it comes to sensitive data. So, I mean, bridging those transaction costs and getting the incentives right, those are the broad principles, but really finding the incentives are a case-by-case basis. So we find the successful ones are often where like a middle broker is involved in getting these data partnerships going, right? Somebody who can convene multiple people. So a classic example would be in India, UN had a now defunct, but a Pulse Lab Jakarta, the UN data governance system sort of, you know. They would sit in the middle and convince government that they need to play in this data game, that they need to use private sector data. They develop that capacity, because government doesn’t automatically say, I’ll use private sector data, right? And sometimes governments can’t say that either, because the census department often has a rule that thou shalt conduct national surveys, not use call detail records for population projection, right? So they don’t give up, so work with government. Then bring like five different private sector players together. Sometimes it involves paying for the data, sometimes it’s setting up the incentive systems. The Global Partnership for Sustainable Development Data in Africa brought together the Group on Earth Observations, which allowed satellite data about Africa, like as a block, and to any country who wanted it, they made it available. So data brokerage also plays a role. I’m not saying government can’t be a data broker, but that role of a data broker is really important, because otherwise what you have is one-off data transactions, right? I mean, during COVID, everyone managed to get some Facebook data to understand where people were. That’s not really useful, because now COVID is over, none of those data is flowing anymore to government or to civil society. So to set it up in a sustainable way that you can understand development and use that data requires a bit more.

Henri Verdier:
Thank you for your very precise and important questions. First, as you said, most people of power has quite an instinct of hiding the data. But this is the old approach. First, obviously this is not the best global organization, as you can easily see in the bureaucracy, for example. When I did join the French government 10 years ago to lead the open data policy, sometimes four different administrations were sharing the same data with mistakes, and they did spend a lot of time and money to… to sell data between administrations of the same government. So it was unuseful, expensive, long. I discovered, because it was expensive, sometimes some administration did use very old data sets because they did buy it just every four years, for example, to the neighbor and with the same money because we are one state. So this is not the best global organization and maybe this is not the best strategy. What I’ve learned from the digital economy story is that platform strategies are better. If you have data, you share this and you become the center of the ecosystem and you have more influence, maybe less direct power but much more soft power. And the story of, I don’t know, Microsoft, Google, Amazon, is a story of people sharing their data, not of people hiding their data. So first, yes, this is a natural instinct but we have to fight it because this is a stupid strategy to hide your data. Then, regarding the controversies regarding open source, yes, in France, we usually consider that open source is the best security approach because you can check, you can contribute, so if you discover something, you can fix it. That’s funny because, for example, if you observe the story of European countries, now everything is converging but 20 years ago, the French public sector did use a lot of open source and free software and not the private sector and in Germany, it was the contrary. The German companies did use a lot of free software and not the German government. So you have also national histories, of course. It depends of your… But in France, probably, it’s also political. Most public decision-makers consider that open source is less expensive. And if it’s not, because sometimes it has cost, of course. But you will spend your money to pay national workers, not benefits in Seattle. So that’s a better use of your public money, because you create value in your country. And usually it’s less expensive. A better security, and maybe a better democracy. You know, in the Declaration of Human Rights, in 1789, we say that the government has to be accountable, that every citizen has a right to understand what the government is doing, and to check if this is the most efficient approach. So now, most of the governmental actions are made through big and complex systems. If you don’t have the right to understand the black box, you are not a perfect democracy. And you have to rely on someone that pretends to make the best, but you don’t know. So the mix of cost, security, and democracy makes that in front, this is not a controversy anymore. Most people in the public sector encourage this approach. If you need a strategy, you did ask for. The first easy step is about public procurement. I’m not speaking about buying software. I’m speaking about buying services. I remember 10 years ago, the city of Paris wanted a network of self-driving cars. But they did write in the procurement, and I will access to every data, and I will share it in open data. And the companies didn’t want to. But they said, that’s my market, my procurement. If you don’t accept, I will take another solution. So for water, for transport, for when you buy a service, or you delegate a public service, just think about writing one clause saying. And I will take the data, and I will share the data. That’s not so difficult if you have a competitive market. The second thing, of course, is to explain, to exchange, to build an ecosystem. Yes, to be frank, I don’t think that those strategy can be done if you don’t have any ecosystem. It can be an ecosystem of open source software. It can be an ecosystem of startup or a big tech company. I don’t care, but you need to work with the civil society or private sector. You need to work with outside of the government. If you cannot rely on some skills and competencies and energy and innovation and creativity, that’s very difficult. And regarding the La Loi pour une République Numérique, so to be precise, we wrote that every software that the government develop or we pay for development has to be open source. It was built on the premises of the law for free access to information. During the 70s, we decided, so we wrote that the citizen has a right to ask for every information regarding government action. So how did you pay? Where did the money go? And we did build on these premises. So of course, when we buy, as I said, a consumer product, we don’t ask for open source. But when we finance the development of the product or when we develop ourself, this is mandatory. Regarding the competencies, as you said, this is very often a problem. But you don’t really need very, very, very skilled people, because we are speaking about a simple IT. And sometimes, for example, just a funny story. Ten years ago, I did create also the job of chief data officer for the French government. And I did hire a great data scientist to fix and to build good public policies. And I did hire brilliant people, and we did help maybe 100 administrations to improve some public policies. And after four years, they went to me and they told me, this job is a bit boring. We did just use Excel software and linear regression. Because government has very structured data and very simple questions. You don’t need to make a generative AI on a big data with a big… You don’t need this to fix 80% of the problems. If you have simple people with simple software, but very focused to have an impact. And very often, we did build, for example, in France, the French ID system, France Connect. Which is used now by 40 million people every week. We are a small country regarding to India. So 40 million people is something in France. I did build it with six developers in six months. The global price was 600,000 euros. Of course, if I had decided to buy it to some big companies that you can imagine, it would have cost, I don’t know, 30 million euros. But when you do it yourself with simple principles, with this agile methodology I did mention. So make a first minimum viable product and then improve it. That’s not so expensive and you don’t need a Nobel Prize, if I may. You just need good and serious developers. And maybe one last thing. I was there when we decided this law regarding… So some people had concerns. So we decided to mention a cybersecurity exception. So if the cybersecurity agency say that publishing the code is dangerous, we won’t. It was five years ago. It did never happen. They did never find a software publishing the code was dangerous. So it was a security to make people comfortable, and it was never useful.

Helani Galpaya:
Let me just make a quick thing. I think this is quite amazing. Just one little challenge, depending on the structure of your civil service, is to attract people with skill to do this kind of development. You need to look at what other options you have. And particularly in South Asia, they can work for a global IT firm, usually for five to 10 times the government’s salary. And that’s a real incentive problem. So the way some countries deal with it is to have these other structures, like a government-owned private company that does a lot of this IT development, who don’t have to abide by government pay scales. And that then suddenly makes it attractive, somebody who wants to do civic tech, public technology, but also isn’t compromising and making low government salary.

Henri Verdier:
If I can say something, because that’s very important. So most of the people that went to work with me did divide their salary by two. But you can have very skilled and dedicated people if you give them a mission and autonomy. But if you ask them to divide their salary and to obey to a big hierarchy chain and to respect a stupid and a very complex framework, so you have to give them a mission, a real mission. Let’s fight unemployment. Let’s educate. Let’s end kind of autonomy. And that’s why we have to change the way we do organize bureaucracy. But that’s not impossible. And actually, a lot of countries did it. And more and more, I feel. And always with people coming from the private sector. Private is a big, important open source ecosystem. It can be also Wikipedia, GitHub, OpenStreetMap. In France, we work a lot with the OpenStreetMap community. Linux, Debian. It’s not always private firms. But that’s outside of the government.

Cynthia Lo:
Thank you. And taking a look on our virtual attendees, we have some questions on whether there are government tools regarding securing data. Just double check. And potentially, I think let’s start with that first. If there’s any thoughts on that. If not, we do have another question as well.

Mike Linksvayer:
I have a small comment on that that might not be directly addressing it. But I just want to highlight how important basically cybersecurity is for protecting data. If you have a breach to an exploit, then your data is exposed, no matter what other measures you have taken. And I want to kind of tie that back into the previous discussion. I think the idea that open source is more secure because everybody can audit it and see exploits and fix them is sort of true. But also a little bit of a double-edged sword and can actually be useful and is very pertinent in policy conversations now. Because one analogy is that open source is free, but it’s also like a free puppy that you have to take care of. And due to incidents like Log4J, I think the attention of policymakers has been focused that open source is part of our societal infrastructure. And it’s something that we can’t only rely on the developers of individual projects to adequately secure. So there needs to be kind of investment from a bunch of stakeholders, including governments in making sure that that ability to for everybody to review the code and make fixes that actually acted on. And Germany is really a leader in this with the sovereign tech funds, but there are others in the U.S. Open Technology Fund and kind of others brewing. But I think that’s a really important point that potential for open source being more secure actually needs to be actioned and needs coordinated action. And I think in sort of another way that this kind of loops back on itself is that those decisions about where to invest, what open source code is actually no critical for power plants, for elections or whatever, you actually need data to be able to identify where you make those investments. Otherwise you’re boiling the ocean. So it’s really tangential, but it just basic cybersecurity is just absolutely crucial for protecting data.

Henri Verdier:
You’re completely right. Open source creates a possibility to check, but someone has to do it. I have another funny experience. In France, we had interesting free bureautic suit, so Word, Excel, it was named Framasoft. And during the COVID, the Ministry of Education decided and said publicly. I will use Phramasoft. And the people from Phramasoft did yell and contest this, and they said, are you crazy? Are you really considering to put 1 million teachers and 10 million students on my infrastructure without giving me anything? But I will die. You have to finance infrastructure, servers, or you will kill me. And that was funny because it could have been seen as a big victory. That’s the French Ministry of Education, one of the biggest international administrations, bigger than the Red Army. So it could have been seen as a victory, but it was the kiss of death. And so we have to be serious and to nurture and protect and finance this ecosystem, or we will kill it. There is no such thing as a free software, free lunch. Someone has to pay a bit.

Cynthia Lo:
Thank you. I know we are at time, but I want to double check to see if anybody has any questions in the audience here or online. All right. Well, thank you so much, everybody, for attending. Any concluding thoughts from our speakers here? Nope, not a problem. Well, thank you so much for everybody to attend this very early morning in Japan session. And we look forward to any other thoughts that you have on OpenCode on development. Thank you.

Audience

Speech speed

176 words per minute

Speech length

440 words

Speech time

150 secs

Cynthia Lo

Speech speed

157 words per minute

Speech length

1275 words

Speech time

488 secs

Helani Galpaya

Speech speed

177 words per minute

Speech length

3638 words

Speech time

1234 secs

Henri Verdier

Speech speed

155 words per minute

Speech length

5032 words

Speech time

1950 secs

Mike Linksvayer

Speech speed

160 words per minute

Speech length

3113 words

Speech time

1165 secs

Telegram Bot Test Test

 Text, Computer Hardware, Electronics, Hardware
Telegram Bot Test Test 2

China pushes blockchain adoption in banking sector

While maintaining restrictions on crypto trading, China continues to promote blockchain integration …

EU universities could anchor AI strategy

Funding and scaling challenges limit current potential.

Financial platform upgrade brings AI tools to global users

AI tools are expanding globally as Google upgrades Finance, enabling users to access localised data,…

Human work roles shift alongside AI

Long term success depends on aligning AI with workforce needs.

Intentional organisations amongst leading companies in the AI era

HR is shifting towards a strategic role in governance and capability building.

EU and Morocco launch digital dialogue

The initiative focuses on AI, infrastructure and digital innovation.

Armenia plans AI road scanning system

The system will analyse road conditions and recommend repairs.

Geneva Cyber Week to bring diplomacy, cyber policy, and AI security debates together

Policymakers and technical experts will address cyber stability and resilience amid rising geopoliti…

Japan approves APPI amendment bill on personal data, AI training, and fines

The APPI amendment bill approved by Japan's Cabinet would reshape personal data rules for enforcemen…

French data protection authority sets out 2026 GDPR and AI guidance agenda

Planned CNIL work for 2026 covers GDPR compliance, AI models, health data, and security recommendati…

UK government reviews regulatory options for enterprise connected devices

Enterprise connected devices are the focus of UK government's plans to update security principles an…

UNCTAD report notes global trade growth alongside increasing fragmentation risks

UN trade data highlights higher import costs and tighter financial conditions as key challenges faci…

OpenAI launches child safety framework to address AI risks

Child safety efforts expand as OpenAI introduces coordinated approach to reduce online harm.

Consultation opens on measuring AI energy consumption and emissions in the EU

A new EU consultation seeks input on measuring AI energy consumption, emissions, and efficiency.

EU advances AI copyright safeguards through GPAI taskforce discussions

GPAI discussions focus on improving transparency and accountability in AI copyright compliance.

Greece moves to restrict youth social media access with new digital age rules

A new regulation in Greece requires platforms to verify age and strengthen protections for minors on…

Government Digital Service and DSIT publish Digital and Data Benefits framework

A new framework from the Government Digital Service covers AI, service transformation, data, capabil…

Digital equity advances as UNESCO promotes Universal Acceptance

Online inclusion increases with Universal Acceptance, while UNESCO advances policies to empower equi…

Experts warn of potential quantum disruption to blockchain security

The shift to quantum-resistant infrastructure is a key challenge for decentralised networks, requiri…

European Business Council in Japan holds first cybersecurity conference in Tokyo

Tokyo hosted the EBC Digital Committee’s first cybersecurity event, featuring expert presentations, …

Singapore to update cybersecurity standards and vendor obligations amid AI-enabled threats

Singapore says it will update cybersecurity standards and vendor obligations amid AI-enabled threats

Employee interest grows in crypto payroll options

Clearer regulation and simpler conversion tools are seen as key factors that could accelerate mainst…

European Commission consultation closes on draft AI Act procedure rules

Draft AI Act implementing rules on model access and procedural safeguards are moving forward as the …

Latvia gains EIB expertise to scale technology companies

High-growth companies in Latvia receive targeted guidance and investment support through EIB partner…

Digital Public Goods Alliance roadmap incorporates UNESCO Open Solutions

As per UNESCO, its Open Solutions are part of the Digital Public Goods Alliance roadmap and support …

Corning and Meta start construction on North Carolina AI cable facility

A new manufacturing expansion in North Carolina aims to support AI infrastructure, enhance domestic …

Eurasian Development Bank Fund expands digital cooperation with Uzbekistan

EDB Fund and Uzbekistan officials agree to create a joint roadmap for digital projects, aiming to bo…

UK data reveals alarming growth in online child abuse cases

New evidence by IWF shows online child abuse is driven by scale and weak enforcement.

EU digital identity strengthens after 20 years of .eu expansion

The .eu domain celebrates two decades of advancing cross-border digital identity across the EU.

The implementation of the EU AI Act with a focus on general-purpose AI models

The European Union is progressing into the implementation phase of its Artificial Intelligence Act, …

Adobe launches a free AI learning tool for students

Learning materials from PDFs, Docs, and notes are easier to generate with Adobe’s Student Spaces too…

ICO launches online privacy campaign for parents

A new ICO campaign focuses on helping parents talk to young children about protecting personal infor…

Project Glasswing unites tech firms for AI-driven cyber defence

By integrating AI into cybersecurity workflows, partners seek to improve resilience against increasi…

New law strengthens protections for healthcare patients in Brazil

The statute introduces enforcement mechanisms and frames violations of patient rights as breaches of…

Transparency push for automated recruitment in the UK

Regulator highlights risks of automated hiring systems as adoption accelerates in the UK.

Kazakhstan Machinery Forum examines technology policy, industrial development and energy strategy

Discussions covered procurement policy, localisation and industrial modernisation efforts.

MIT system boosts data centre storage efficiency

A method improves storage efficiency by reducing performance differences across shared storage syste…

IMF warns of rising risks in tokenised financial systems

Accelerating tokenised markets are reshaping finance, raising IMF concerns over stability, oversight…

China sets standards for AI ethics review and algorithm accountability

New rules support innovation within China's model of AI ethics and data governance.

UAE’s Technology Innovation Institute launches Falcon Perception AI model

The model combines vision and language with aim to support real world AI applications.

US agencies warn of cyber intrusions into critical infrastructure systems

The advisory links the activity to previously identified advanced persistent threat (APT) groups ass…

National Crime Agency to receive CSEA reports under UK Online Safety Act rules

New UK rules require certain platforms to register with the National Crime Agency and submit reports…

IAPP Global Summit session examines AI, privacy, and the courts with US federal judges

James Boasberg and Allison Burroughs used an IAPP panel to discuss AI, surveillance, and the courts.

ENISA opens public review of draft EUDI Wallet cybersecurity scheme

A draft EUDI Wallet cybersecurity certification scheme has been published by ENISA for public review…

Transparency push for online advertising systems

A shared notification system could improve transparency and trust.

Student AI rights framework unveiled

Proposal aims to guide responsible AI use in the US education system.

UK Research and Innovation review calls for reform at The Alan Turing Institute

Experts call for reforms to improve governance, strategic clarity, and effectiveness in delivering p…

CNN develops agent infrastructure for AI media trading

Plans for automated media transactions are underway, with CNN developing new systems aimed at improv…

GEANT Security Days 2026 to address AI, internet resilience, and cyber resilience

Utrecht will host GEANT Security Days 2026 with keynotes and discussions on AI, resilience, and secu…

ENISA conference in Cyprus to focus on EU cybersecurity certification

A new ENISA conference agenda shows EU cybersecurity certification remains a live policy and impleme…

DMCC Act 2024 brings UK ADR reporting rules into force

Accredited ADR providers in the UK must now submit annual reports under the DMCC Act 2024.

MIT study finds steady AI growth reshapes work

Workplace roles are gradually shifting towards oversight and management of AI systems as automation …

AI improves structured and coherent legal systems for better regulation

Regulatory analysis is increasingly supported by AI to map interdependencies within legal frameworks…

OpenAI presents policy proposals addressing AI’s economic and labour impacts

Analysis highlights OpenAI's plans for the AI economy and public infrastructure governance.

South Korea-France partnership reshapes AI and technology cooperation strategy

Strategic agreements highlight South Korea and France's efforts to cooperate on AI and semiconductor…

Anthropic scales AI compute to meet rising global demand

Expansion of compute infrastructure supports rising demand for Claude models, improving scalability …

Penguin Random House sues OpenAI for copyright infringement over ‘Coconut the Little Dragon’ series in Germany

Penguin Random House has sued OpenAI in a Munich court, alleging ChatGPT infringed copyright by repr…

China guidelines reshape e-commerce growth and digital trade strategy

Updated rules reinforce China's governance approach to e-commerce and platform regulation.

AI safety may hinge on missing human body awareness

Lack of internal bodily awareness in AI systems, a feature key to human cognition and behaviour regu…

AI chatbots are reshaping classroom debates, raising concerns over homogenised discussion

Yale students and academics say AI chatbots are making classroom discussions more polished but less …

South Korea advances energy transition strategy to strengthen resilience and green industry

Renewable energy expansion in South Korea targets 100GW capacity and reduced fossil fuel dependence.

UN kicks off Global Mechanism on ICT security, road ahead murky

The long-awaited Global Mechanism has finally launched, creating the UN’s first permanent forum on I…

Sweden’s Riksbank urges households to keep cash and multiple payment options for crisis preparedness

Sweden’s central bank, the Riksbank, is urging households to strengthen payment preparedness amid ri…

UN warns of urgency in shaping responsible AI governance

Rapid AI development amid geopolitical tensions has raised AI governance concerns, calling for coord…

Power hardware shortages are delaying AI data centre expansion, despite record investment

US AI data-centre expansion is being constrained by shortages of power-delivery equipment such as tr…

Microsoft markets Copilot as a productivity boost but warns it is ‘for entertainment purposes only’

Microsoft’s Copilot Terms of Use state that the AI is 'for entertainment purposes only' and not for …

Will AI turn novel-writing into a collaborative process

The article argues that a novel’s value extends beyond prose to include premise, plot and character,…

Digital Services Act agreement links European Commission and EUIPO on online IP enforcement

EUIPO will support Digital Services Act work on counterfeit goods, pirated content, and online intel…

US Supreme Court narrows ISP copyright liability, sharpening focus on intent with potential implications for generative AI

A unanimous US Supreme Court ruling narrowed the circumstances under which internet service provider…

ENISA launches consultation on EU digital wallet certification

EU member states are expected to introduce at least one certified digital identity wallet by the end…

EU lapse in child safety rules raises concerns

Nearly 250 child rights organisations warn that the regulatory gap could undermine coordinated effor…

Advocates push for transparency rules in student AI systems

Universities are urged to adopt the guidelines to strengthen accountability and protect students as …

AI and 6G strategy drives South Korea’s digital transformation agenda

Digital policy in South Korea evolves as AI development plan supports infrastructure, skills and cyb…

Russian draft laws introduce licensing and limits on digital assets

The legal framework introduces structured digital asset rules under law, including licensing, invest…

Brazil expands AI in public services through Fala.BR reform

Digital governance in Brazil evolves with Fala.BR AI adoption supporting transparency and anti-corru…

China advances new power grid strategy to support clean energy transition

National reforms advance with a new power grid plan integrating renewables and smart technologies.

Gallup finds AI is shaping some college students’ academic choices

New Gallup findings suggest AI is shaping academic choices for some currently enrolled college stude…

Brazil launches national assistive technology centre to advance disability rights

Disability policy evolves as a new assistive technology centre supports autonomy and social particip…

Commission invests in fact-checking to combat disinformation

Five million euro grant backs independent verification.

UN commissioner calls for human rights-centred digital governance at GANHRI conference

Volker Türk told GANHRI that digital governance, surveillance, and AI must be addressed through huma…

University of South Wales becomes the first in the UK to AI qualification as part of a degree

Business and Management students at the University of South Wales earn dual AI accreditation and use…

EU interim ePrivacy derogation for voluntary CSAM detection expires

After failed negotiations on an extension, the EU's interim ePrivacy derogation expired on 3 April 2…

Kazakhstan positions AI at heart of industrial strategy

A dedicated AI university and national training programme aim to build skilled workforce.

Oracle expands AI options for US government agencies

Advanced AI infrastructure by Oracle helps the US government meet high-stakes operational and citize…

OHCHR seeks inputs on protecting human rights defenders in the digital age

A new OHCHR call invites states, civil society, industry, and other stakeholders to submit evidence …

ICT4Peace hosts workshop to support preparations for Geneva 2027 AI Summit

ICT4Peace hosted a launch event at the GenAI Zürich 2026 conference to support preparations for the …

US agencies launch national AI workforce initiative

Joint research will examine how AI is reshaping jobs, employment patterns, and broader economic outc…

Microsoft commits $10 billion to Japan’s AI future

The investment supports Prime Minister Takaichi's goal of driving economic growth through technology…

Nova Scotia launches five person AI team to support government operations

Public sector adoption of AI grows as Nova Scotia introduces new roles and systems focused on improv…

EU delegation in China calls for sustainable e-commerce and safety standards

The European Parliament delegation reviewed e-commerce trends, aiming to strengthen international co…

Global cyber stability conference set for May 2026 in Geneva

Organised by UNIDIR, the event will assess past and present developments in ICT security while explo…

World Economic Forum signals new phase for frontier technologies

AI scaling is shifting from algorithmic progress to physical limits, with electricity demand and gri…

EU strengthens IP enforcement under Digital Services Act

Reinforcing IP implementation targets systemic risks on major online platforms, including counterfei…

Canada reviews Privacy Act to modernise data protection and digital governance

Federal review of the Privacy Act focuses on secure data reuse and improved public service delivery.

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

AI regulation changes, Amnesty International claims, could undermine safeguards against discriminati…

Digital services trade reshapes global economy

Technology-enabled digital services drive trade growth, yet a wide divide remains between advanced a…

UK’s Ofcom report reveals evolving online habits and growing AI reliance

Rising AI adoption and passive social media use are reshaping how UK adults engage online.

IBM and ETH Zurich announce partnership on AI and quantum algorithms

Both organisations believe algorithms will define the next computing revolution as quantum and AI te…

Malwarebytes highlights Microsoft findings on WhatsApp attachments used in Windows attacks

A WhatsApp attachment campaign targeting Windows users used social engineering and built-in system t…

Cyberattack on Hasbro exposes vulnerabilities in large enterprise systems

Following the cyberattack, Hasbro faces operational delays and an ongoing investigation into potenti…