Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112

8 Oct 2023 03:00h - 05:30h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Yik Chan Chin

In thorough discussions concerning China’s data policy and the right to data access, correlated with Sustainable Development Goal 9 (Industry, Innovation, and Infrastructure) and Sustainable Development Goal 16 (Peace, Justice and Strong Institutions), China’s unique interpretation of data access has become a focal point. According to the analysis, the academic debate and national policy in China are primarily driven by an approach that interprets data as a type of property. This perspective divides rights associated with data into three fundamental components: access, processing, and exchange rights. It posits that these rights can be traded to generate value, as explicitly stated in the government’s policy documents.

However, this policy approach has drawn substantial critique for disregarding other significant aspects of data access. Chinese policies predominantly fail to recognise data's inherent character as a public good. Both the academic sphere and governmental policy scarcely acknowledge this, undervaluing data's potential contribution to societal advancement beyond merely commercial gains. Along these lines, the rights and benefits of individual citizens are often overlooked in favour of promoting enterprise-related interests.

The country's data access policy is primarily designed to unlock potential commercial value, especially within enterprise data, an aspect contributing to the imbalance of power between individual users and corporations. Such power dynamics remain largely unaddressed in China's data-related discussions and policy settings, to the potential detriment of individuals.

Given these observations, the overall sentiment towards the Chinese data policy appears to be broadly negative. Acknowledging data’s essence as a public good and according importance to individual rights and power balances would be fundamental components for a more favourable policy formulation and discourse. The inclusion of these elements will ensure that the data policy reflects the principles of SDG 9 and SDG 16, aiming for a balance between enterprise development and individual rights.

Vagisha Srivastava

Web Public Key Infrastructure (WebPKI), an integral component of internet security, underpins capabilities such as digital document signing, signature verification, and document encryption. Its significance is illustrated by the incident involving the certificate authority DigiNotar, whose misissuance of some 500 certificates compromised internet security, underlining the importance of digital certificates in authenticating parties on the web.

WebPKI governance intriguingly falls within the public goods paradigm. While governments traditionally deliver public goods and the commercial market handles private goods, in the case of WebPKI private entities make notable contributions, defying conventional dynamics in the production of both public and private goods. That said, government involvement is not entirely absent, with the US Federal PKI and Asian national Certification Authorities (CAs) actively partaking.

That private entities spearhead WebPKI security governance raises certain concerns. Governments may find themselves hamstrung when attempting to represent the global public interest or produce global public goods in this complex context. As a result, the platforms most directly affected by an insecure web environment, such as browsers and operating systems, have secured vital roles in security governance.

The Certificate Authority and Browser Forum, established in 2005, is crucial in coordinating WebPKI-related policies. The forum serves as a hub where root store operators coordinate policies and gather feedback directly from CAs. Since its inception, its influence has been such that it sets baseline requirements for CAs on issues like identity vetting and certificate content.
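The trust decision that root stores and CAs jointly govern can be sketched in miniature: a client accepts a website's certificate only when its issuer chain terminates at a root already present in the client's trust store. The following Python sketch uses made-up names and reduces each certificate to a (subject, issuer) pair; it is a conceptual illustration of the chain walk, not real X.509 validation, which also checks signatures, validity periods, and revocation.

```python
# Toy model of WebPKI trust: a client accepts a leaf certificate only if
# its issuer chain terminates in a CA present in the client's root store.
# Names and the chain structure here are illustrative, not real certificates.

TRUSTED_ROOTS = {"ExampleRoot CA"}  # hypothetical root store

# Each "certificate" is just a (subject, issuer) pair for this sketch.
CHAIN = [
    ("www.example.org", "Example Intermediate CA"),
    ("Example Intermediate CA", "ExampleRoot CA"),
    ("ExampleRoot CA", "ExampleRoot CA"),  # self-signed root
]

def chain_is_trusted(chain, roots):
    # Walk the chain: each cert's issuer must be the next cert's subject,
    # and the final, self-signed cert must appear in the root store.
    for (_, issuer), (next_subject, _) in zip(chain, chain[1:]):
        if issuer != next_subject:
            return False
    last_subject, last_issuer = chain[-1]
    return last_subject == last_issuer and last_subject in roots

print(chain_is_trusted(CHAIN, TRUSTED_ROOTS))  # True for this toy chain
```

The governance point is that whoever controls the contents of `TRUSTED_ROOTS` — in practice, the browser and operating-system root programmes — effectively decides which CAs the whole web trusts.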

Regarding the internal workings of such organisations, any formal language proposed for a vote is agreed upon in advance, and the consensus mechanism is established before voting takes place. Notably, there is curiosity about how browsers, an integral part of internet infrastructure, respond to such voting processes.

To conclude, internet security governance operates within a complex realm driven by both private and public actors. Structures like WebPKI and the Certificate Authority and Browser Forum play pivotal roles, and the power dynamics and responsibilities between these players shape the continued evolution of policies related to internet security.

Kamesh Shekar

The in-depth analysis underscores the urgent necessity for a comprehensive, 360-degree approach to the artificial intelligence (AI) lifecycle. This involves a principle-based ecosystem approach, ensuring nothing in the process is overlooked and emphasising a need for coverage that is as unbiased and complete as possible. The engagement of various stakeholders at each stage of the AI lifecycle, from inception and development through to end-user application, is seen as pivotal in driving and maintaining the integrity of AI innovation.

The principles upon which this ecosystem approach is formed have been derived from a range of globally respected frameworks. These include guidelines from the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the European Union (EU), and notably, India’s G20 declaration. Taking these well-established and widely accepted frameworks on board strengthens the argument for thorough mapping principles for varied stakeholders in the AI arena.

The analysis also delves into the friction that can occur around the interpretation and application of said principles. Distinct differences are highlighted, for instance, in the context of AI facets such as the ‘human in the loop’, illustrating the different approaches stakeholders adopt at various lifecycle stages. This underscores the importance of operationalisation of principles at every step of the AI lifecycle, necessitating a concrete approach to implementation.

A key observation in the analysis is the central role the government plays in overseeing the implementation of the proposed framework. Whether examining domestic scenarios or international contexts, the study heavily emphasises the power and influence legislative bodies hold in implementing the suggested framework. This extends to recommending an international cooperation approach and recognising the potentially pivotal role India could play within the Global Partnership on Artificial Intelligence (GPAI).

Responsibility for using these systems does not rest solely with the developers of AI technologies. End-users and impacted populations are also encouraged to take on the mantle of responsible users, a sentiment heavily emphasised in the paper. In this thread, the principles and operationalisation for responsible use are elucidated, urging a thoughtful and ethical application of AI technologies.

Another essential observation concerns the AI lifecycle employed in the analysis, which has been derived from and informed by frameworks from both the National Institute of Standards and Technology (NIST) and the OECD, with a handful of additional aspects added and validated within the paper. This perspective recognises and incorporates substantial work already performed in the domain whilst adding fresh insights and nuances.

As a concluding thought, the analysis recognises the depth and breadth of the topics covered, calling for further in-depth discussions. This highlights an open stance towards continuous dialogue and the potential for further exploration and debate, possible in more detailed, offline conversations. As such, this comprehensive and thorough analysis offers a wealth of insights and provides excellent food for thought for any stakeholder in the AI ecosystem.

Kazim Rizvi

The Dialogue, a reputed tech policy think-tank, has authored a comprehensive paper on the subject of responsible Artificial Intelligence (AI) in India. The researchers vehemently advocate for the need to integrate specific principles beyond the deployment stages, encompassing all facets of AI. These principles, they assert, should be embedded within the design and development processes, especially during the data collection and processing stages. Furthermore, they argue for the inclusion of these principles in both the deployment and usage stages of AI by all stakeholders and consumers.

In their study, the researchers acknowledge both the benefits and challenges brought about by AI. Notably, they commend the myriad ways AI has enhanced daily life and professional tasks. Simultaneously, they draw attention to the intrinsic issues linked with AI, specifically around data collection, data authenticity, and potential risks tied to the design and usage of AI technology.

They dispute the notion that AI should be stringently regulated at the outset. Instead, the researchers propose a joint venture, where civil society, industry, and academia embark on a journey to understand the nuances of deploying AI responsibly. This approach would lead to the identification of challenges and the creation of potential solutions appropriate for an array of bodies, including governments, scholars, development organisations, multilateral organisations, and tech companies.

The researchers acknowledge the potential risks that accompany the constant evolution of AI. While they recall that AI has been in existence for several decades, the study emphasises that emerging technologies always have accompanying risks. As the usage of AI expands, the researchers recommend a cautious, steady monitoring of potential harms.

The researchers also advise a global outlook for understanding AI regulation. They posit that a general sense of regulation already exists internationally. What’s more, they suggest that as AI continues to grow and evolve, its regulatory framework must do the same.

In conclusion, the research advocates for a multi-pronged approach that recognises both the assets and potential dangers of AI, whilst promoting ongoing research and the development of regulations as AI technology progresses. The researchers present a balanced and forward-thinking strategy that could create a framework for AI that is responsible, safe, and of maximum benefit to all users.

Nanette Levinson

The analysis unearths the growing uncertainty and expected institutional alterations taking centre stage within the sphere of cyber governance. This is based on several significant indicators of institutional change that have come to the fore. Indicators include the noticeable absence of a concrete analogy or inconsistent isomorphic poles, a shift in legitimacy attributed to an idea, and the emergence of fresh organisational arrangements – these signify the dynamic structures and attitudes within the sector.

In a pioneering cross-disciplinary approach, the analysis has linked these indicators of institutional change to an environment of heightened uncertainty and turbulence, as evidenced from the longitudinal study of the Open-Ended Working Group.

An unprecedented shift within the United Nations’ cybersecurity narrative was also discerned. An ‘idea galaxy’ encapsulating concepts such as human rights, gender, sustainable development, non-state actors, and capacity building was prevalent in the discourse from 2019 through to 2021. However, an oppositional idea galaxy unveiled by Russia, China, Belarus, and a handful of other nations during the Open-Ended Working Group’s final substantive session in 2022, highlighted their commitment towards novel cybersecurity norms. The emergence of these opposing ideals gave rise to duelling ‘idea galaxies’, signalling a divergence in shared ideologies.

This conflict between the two ‘idea galaxies’ was managed within the Open-Ended Working Group via ‘footnote diplomacy.’ Herein, the Chair acknowledged both clusters in separate footnotes, paving the way for future exploration and dialogue, whilst adequately managing the current conflict.

Of significant note is how these shifts, underpinned by tumultuous events like the war in Ukraine, are catalysing potential institutional changes in cyber governance. These challenging times, underscored by clashing ideologies and external conflict, seem to herald the potential cessation of long-standing trajectories of internet governance involving non-state actors.

In conclusion, there is growing uncertainty surrounding the future of multi-stakeholder internet governance due to the ongoing conflict within these duelling idea galaxies. The intricate and comprehensive analysis paints a picture of the interconnectivity between global events, institutional changes, and evolving ideologies in shaping the future course of cyber governance. These indicate a potential turning point in the journey of cyber governance.

Audience

This discussion scrutinises the purpose and necessity of government-led mega constellations in the sphere of satellite communication. The principal argument displayed scepticism towards governments’ reasoning for setting up these constellations, with a primary focus on their significant role in internet fragmentation. Intriguingly, some governments have proposed limitations on the distribution of signals from non-domestic satellites within their territories. However, the motives behind this proposal were scrutinised, specifically questioning why a nation would require its own mega constellation if their field of interest and service was confined to their own territories.

Furthermore, the discourse touched on the subject of ethical implications within the domain of artificial intelligence (AI). It highlighted an often-overlooked aspect in the responsible use of AI—the end users. While developers and deployers frequently dominate this dialogue, the subtle yet pivotal role of end-users was underplayed. This is especially significant considering that generative AI is often steered by these very end-users.

Another facet of the AI argument was the lack of clarity and precision in articulating arguments. Participants underscored the use of ambiguous terminologies like ‘real-life harms’, ‘real-life decisions’, and ‘AI solutions’. The criticism delved into the intricacies of the AI lifecycle model, emphasising an unclear derivation and an inconsistent focus on AI deployers rather than a comprehensive approach including end-users. The model was deemed deficient in its considerations of the impacts on end-users in situations such as exclusion and false predictions.

However, the discussion was not confined to scepticism. One audience member offered a more optimistic outlook, cautioning that stringent regulation of emerging technologies like AI might stifle innovation and progress. By way of historical analogy, they likened such regulation to restrictions imposed on the printing press in 1452.

Throughout the discourse, themes consistently aligned with Sustainable Development Goal 9, thus underscoring the significance of industry, innovation, and infrastructure in our societies. This dialogue serves as a reflective examination, not just of these topics, but also of how they intertwine and impact one another. It accentuates the importance of addressing novel challenges and ethical considerations engendered by technological advances in satellite communication and AI.

Jamie Stewart

The rapid advancement of digital technologies and internet connectivity in Southeast Asia is driving the development of assorted regulatory instruments within the region, underwritten by extensive investment in surveillance capacities. This rapid expansion, however, is provoking ever-growing concerns over potential misuse against human rights defenders, stirring up a negative sentiment.

A report on cybersecurity in Southeast Asia from the Office of the United Nations High Commissioner for Human Rights (OHCHR) draws attention to the potential use of such legislation against human rights defenders. Concerns are heightening around the wider consensus process to combat cybercrime, with the UN General Assembly expressing particular apprehension about the misuse of provisions relating to surveillance, search, and seizure.

What emerges starkly from the research is a disproportionate impact of cyber threats and online harassment on women. The power dynamics in cyberspace perpetuate those offline, leading to a targeted attack on female human rights defenders. This gender imbalance along with the augmented threat to cybersecurity raises concerns, aligning with Sustainable Development Goals (SDG) 5 (Gender Equality) and SDG 16 (Peace, Justice, and Strong Institutions).

The promotion of human-centric cybersecurity with a gendered perspective charts a course of positive sentiment. The driving aim is for people and human rights to be the core elements of cybersecurity. Recognition is thus given to the need for a gendered analysis, with research bolstered by collaborations with the UN Women Regional Data Centre in the Asia Pacific.

An in-depth exploration of this matter further uncovers a widespread range of threats, both on a personal and organisational level. This elucidates the sentiment that a human-centric approach to cybersecurity is indispensable. Both state and non-state actors are found to be contributing to these threats, often in a coordinated manner, with surveillance software-related incidents being particularly traceable.

Additionally, the misuse of regulations and laws against human rights defenders and journalists is an escalating worry, prompting agreement that such misuse is indeed occurring. This concern is extended to anti-terrorism and cybercrime laws, which could potentially be manipulated against those speaking out, potentially curbing freedom of speech.

On the issue of cybersecurity policies, while their existence is acknowledged, concerns about their application are raised. Questions emerge as to whether these policies are being used in a manner protective of human rights, indicating a substantial negative sentiment towards the current state of cybersecurity. In conclusion, although the progression of digital technologies has brought widespread benefits, they also demand a rigorous protection of human rights within the digital sphere, with a marked emphasis on challenging gender inequalities.

Moderator

Throughout the established GigaNet Academic Symposium, held at the Internet Governance Forums (IGFs) since 2006, a multitude of complex topics takes centre stage. This latest iteration featured four insightful presentations tackling diverse subjects ranging from digital rights and trust in the internet, to challenges caused by internet fragmentation and environmental impacts. The discourse centered predominantly on Sustainable Development Goals (SDGs) 4 (Quality Education) and 9 (Industry, Innovation, and Infrastructure).

In maintaining high academic standards, the Symposium employs a stringent selection process for the numerous abstracts submitted. This cycle saw roughly 59 to 60 submissions, of which only a limited few were selected. While this guarantees quality control, it simultaneously restrains the number of presentations and hampers diversity.

Key to this Symposium was the debate on China’s access to data, specifically, the transformative influence the internet and social media platforms have exerted on the data economy. This has subsequently precipitated governance challenges primarily revolving around the role digital social media platforms play in managing data access and distribution. The proposed model for public data in China involves conditional fee access, with data analyses disseminated instead of the original datasets.

One recurring theme in these discussions related to the state-led debate in China that posits data as marketable property rights. Stemming from government policies and the broader economic development agenda, this perspective on data has dramatically influenced Chinese academia. However, this focus has led to a significant imbalance in the data rights dialogue, with the rights of data enterprises frequently superseding those of individuals.

Environmental facets of ICT standards also commanded attention, underscoring the political and environmental rights encompassed within these standards. Moreover, the complexity of measuring the environmental impact of ICTs, which includes carbon footprint and energy consumption through to disposal, confirms the necessity of addressing the materiality of ICTs. The discussion further emphasised that governance queries relating to certificate authorities are crucial to understanding the security and sustainability of low-Earth orbit satellites, given the emergence of conflicts and connections between these areas.

Concluding the Symposium was an appreciative acknowledgement of the participants’ contributions, from submitting and reviewing abstracts to adjusting sleep schedules to participate. Transitioning to a second panel without a break, the Symposium shifted its focus towards cyber threats against women, responsible AI, and broader global internet governance. Suggestions for improvements in future sessions included clarifying and defining theoretical concepts more comprehensively, focusing empirical scopes more effectively, and emphasising the significance of consumers and end-users in cybersecurity and AI discourse. The Symposium, thus, offered a well-rounded exploration of multifaceted topics contributing to a deeper understanding of internet governance.

Berna Akcali Gur

Mega-satellite constellations are reshaping global power structures, signalling significant strategic transitions. Many powerful nations regard these endeavours, such as the proposed launch of 42,000 satellites by Starlink, 13,000 by Guowang, and 648 by OneWeb, as opportunities to solidify their space presence and exert additional control over essential global Internet infrastructure. These are deemed high-stakes strategic investments, indicating a new frontier in the satellite industry.

Furthermore, the rise of these mega constellations is met with substantial enthusiasm for their potential to bridge gaps in the global digital divide. The broadband connectivity they offer, vital for social, economic, and governmental functions, combines low latency with high bandwidth, benefiting applications such as IoT, video conferencing, and video gaming.
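The low-latency claim follows from simple geometry: signals travel at roughly the speed of light, so propagation delay scales with orbital altitude. A back-of-the-envelope Python calculation, using commonly cited altitudes as assumptions (low-Earth-orbit shells around 550 km, geostationary orbit at about 35,786 km), shows why LEO constellations can undercut geostationary latency by more than an order of magnitude:

```python
# Back-of-the-envelope propagation delay for satellite links.
# Altitudes are typical published figures (LEO shells ~550 km,
# geostationary orbit ~35,786 km); real latency adds slant paths,
# processing, routing, and ground-segment delays.

C_KM_PER_S = 299_792.458  # speed of light in vacuum

def round_trip_ms(altitude_km: float) -> float:
    # User -> satellite -> ground station and back: four altitude
    # traversals in the best case (satellite directly overhead).
    return 4 * altitude_km / C_KM_PER_S * 1000

leo = round_trip_ms(550)      # roughly 7 ms
geo = round_trip_ms(35_786)   # roughly 480 ms
print(f"LEO best-case RTT ~ {leo:.1f} ms, GEO ~ {geo:.1f} ms")
```

Real round-trip times are higher once routing and ground-segment delays are included, but the gap between the two orbital regimes remains, which is why interactive applications favour LEO links.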

However, concerns have been raised over the sustainable use of increasingly congested orbital space. Resources in orbit are finite, and present traffic levels could trigger cascading collisions. Such a scenario could render orbits unusable, depriving future generations of the opportunity to utilise this vital space.

The European Union's stance on space policy, particularly on the necessity of owning a mega constellation, shows some contradictions. While an EU document maintains that owning a mega constellation is not essential for access, ownership is nonetheless deemed crucial from strategic and security perspectives, revealing a potentially contradictory standpoint within the Union.

Another issue is fragmentation in policy implementation due to diversification in government opinions, as demonstrated by the decoupling of 5G infrastructure where groups of nations have decided against utilising each other’s technology due to cybersecurity issues. With the rise in the concept of cyber sovereignty, governments are increasingly regarding mega constellations as sovereign infrastructure vital for their cybersecurity.

Lastly, data governance is a significant concern for countries intending to utilise mega constellations. These countries may require that constellations maintain ground stations within their territories, thereby exercising control over cross-border data transfers, a key aspect in the digital era.

In conclusion, the growth of mega-satellite constellations presents a complex issue, encompassing facets of international politics, digital equity, environmental sustainability, policy diversification, cyber sovereignty, and data governance. As countries continue to navigate these evolving landscapes, conscious regulation and implementation strategies will be integral in harnessing the potentials of this technology.

Kimberley Anastasio

The intersection between Information and Communication Technologies (ICTs) and the environment is a pivotal issue that has been brought into focus by major global institutions. For the first time, the Internet Governance Forum highlighted this interconnectedness by setting the environment as a main thematic track in 2020. This decision evidences increasing international acknowledgement of the symbiosis between these two areas. This harmonisation aligns with two key Sustainable Development Goals (SDGs): SDG 9, Industry, Innovation and Infrastructure; and SDG 13, Climate Action, signifying a global endeavour to foster innovative solutions whilst advocating sustainable practices.

In pursuit of a more sustainable digital arena, organisations worldwide are directing efforts towards developing ‘greener’ internet protocols. Within this landscape, the deep-rooted role of technology in the communication field has driven an elevated demand for advanced and sustainable communication systems. This paints a picture of a powerful transition towards creating harmony between digital innovation and environmental stewardship.

Within ICTs, standardisation is another topic with international resonance. This critical process promotes uniformity across the sector, regulates behaviours, and ensures interoperability. Together, these benefits contribute to the formation of a more sustainable economic ecosystem. The International Telecommunication Union, a renowned authority within the industry, has upheld these eco-friendly values with over 140 standards pertaining to environmental protection. Concurrently, ongoing environmental debates within the Internet Engineering Task Force suggest a broader trend towards heightened environmental consciousness within the ICT sector.

The materiality and quantification of ICTs are identified as crucial facets of environmental sustainability. Measuring the environmental impact of ICTs, although challenging, is highlighted as vital. This attention underlines the physical presence of ICTs within the environment and their consequential impact. This focus aligns with the targets of the aforementioned SDGs 9 and 13, further emphasising the significance of ICTs within the global sustainability equation.
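As an illustration of why such measurement is hard, even a first-order estimate of operational carbon depends on two quantities that are difficult to pin down in practice: a device's energy use and the carbon intensity of the grid powering it. The Python sketch below uses illustrative placeholder figures (a 20 W device, a 400 gCO2e/kWh grid), not measured values from any standard:

```python
# Rough operational-carbon estimate for a device or service:
# energy used (kWh) times the grid's carbon intensity (gCO2e per kWh).
# The figures used below are illustrative placeholders, not measured values.

def operational_co2e_kg(power_watts: float, hours: float,
                        grid_g_per_kwh: float) -> float:
    # Convert watt-hours to kilowatt-hours, then grams of CO2e to kilograms.
    energy_kwh = power_watts * hours / 1000
    return energy_kwh * grid_g_per_kwh / 1000

# e.g. a 20 W home router running all year on a 400 gCO2e/kWh grid:
print(operational_co2e_kg(20, 24 * 365, 400))  # about 70 kg CO2e per year
```

Note that this captures only operational emissions; the embodied carbon of manufacture and disposal, which the materiality discussion above points to, requires separate lifecycle accounting.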

In parallel with these developments, a dedicated research project is being carried out on standardisation from an ICT perspective, involving comprehensive content analysis of almost 200 standards from the International Telecommunication Union and the Internet Engineering Task Force. This innovative methodology helps position the study within the wider spectrum of standardisation studies, overcoming the confines of ICT-specific research and implying broader applications for standardisation.

Alongside this larger project, a smaller but related initiative is underway. Its objective is to understand the workings of these organisations within the extensive potential of the ICT standardisation sector. The ultimate goal is to develop a focused action framework derived from existing literature and real-world experiences, underlining an active approach to problem solving.

Collectively, these discussions and initiatives portray a comprehensive and positive path globally to achieve harmony between ICT and sustainability. Whilst there are inherent challenges to overcome in this journey, the combination of focused research, standardisation, and collaborative effort provides a potent recipe for success in the pursuit of sustainable innovation.

Session transcript

Moderator:
Yes, good. Okay, thanks a lot. Good morning, afternoon, evening, good night to many of the people here. Thank you very much for coming. This is the GigaNet Academic Symposium, which as tradition has been going since 2008 at the IGFs. 2006? Oh, the first one I did was 2008. Sorry, Milton. Thanks for the memory. Since 2006 at the IGFs, we’re very grateful to the IGF Secretariat for facilitating this meeting and getting us into this jam-packed program for this year with lots of exciting panels going on. We have a very exciting conference for you today as well, and happy to see so many faces in the room and quite a few people online as well. Just a bit of a summary of the background for this year’s symposium. We were set up, we were informed of the date for the IGF, and then went through our rigorous academic procedure for selecting abstracts that emerged as a result of a call. We had 59 or 60 abstracts submitted to the workshop, and we were able to accept only a small number of these. Thank you very much to everybody who participated in this whole process of submitting an abstract. Thank you very much to the members of the program committee who actually spent time reviewing the abstracts. It’s not easy to do this, so thank you very much to you as well. And thank you to the presenters for actually making their way here or staying up very late or getting up very early during today. Since we’re running a bit late, I’ll cut my presentation there. But I’m also, sorry, I don’t know, somebody else, did somebody else want to say something? No. I’m also acting as the chair and discussant for the first panel, which is taking place right now. We have, and I’ve made a list, we have four presenters, two of whom are here in the room, and two of whom are online. We will start with a paper by Yik Chan Chin from Beijing Normal University. Yik Chan Chin is also a member of the steering committee for the GigaNet Association. 
So Yik Chan Chin will be talking about the right to access in the digital era. Then afterwards we have a paper that will be presented by Vagisha Srivastava on WebPKI and the private governance of trust on the internet. Vagisha is from Georgia Tech. And then the third paper will be on internet fragmentation and its environmental impact, a case study of satellite broadband, which will be presented by Berna Akcali Gur, who is from Queen Mary University in London, and she’s sitting on my immediate left. And then the last paper presented in this panel will be on ICT standards and the environment, a call for action for environmental care inside internet governance, which will be presented by Kimberley Anastasio, who is at American University in the US, and online. OK, without further ado, I will pass the floor to Yik Chan. You now have your 10 minutes to describe your paper. And we’ll move into the next paper immediately after that. OK? Thank you very much.

Yik Chan Chin:
OK, thank you very much, Jim. Can you hear me? Can you hear me now? Hello? Yeah, OK, thank you. Yeah, this is Yik Chan from Beijing Normal University. But actually, I’m in London. So this is 2 AM in the early morning of London time. OK? So my presentation actually is about the right to data access in the digital era, the case of China. And first of all, I would like to contextualize this debate of data access in the Chinese context. So first of all, we talk about why the debate on the access, collection, and dissemination of data became the center of the academic debates and the policymaking in China. There are three factors contributing to this discussion. First of all, the internet and data are perceived as an important driving force for economic development in China. Secondly, the rapid development of the platform economy. And also, the mass production of data has raised governance problems in the storage, transmission, and use of data in China. Thirdly, the role of digital social media platforms in data access and dissemination has strengthened the public demand for governments to act on the protection of the right to information in China. So for those reasons, the academic debate and the national policy on access to data have become the center of policymaking in recent years. And also we found that the conceptualization of the right to access data in China, and the formal and informal rules related to the legitimacy of the public’s epistemic right to data, are quite interesting. So that’s why I focus my study on the relation between the right to access digital data and the epistemic right. And so the data I use in this paper includes the national government’s policy and regulation, and also secondary data. And so what is the epistemic right? So this is a right actually closely linked to the creation and the dissemination of knowledge. 
It is not only about being informed, but about being informed truthfully, understanding the relevance of information, and acting on it for the benefit of oneself and society as a whole. That is the concept we start with. The epistemic right also emphasises equality, such as equality of access and availability of information, and knowledge equality in terms of obtaining critical literacy in information and communication. We also need to understand that data, information, and knowledge are interrelated concepts. Data is a set of symbols, a representation of raw facts; information is organised data; and knowledge is understood information. These three concepts are interrelated, and data, as a form of knowledge, is created through social processes; it is socially constructed. It is therefore interesting to see how different social agents participate in the construction of the right of access to data as part of the creation of social knowledge. In my paper I define different types of data, such as public data and private commercial data, but I will not go into detail because of the time limitation. I define the right of access to data as including two elements. The first element is the right of access to public information, which is recognised as an individual human right by many jurisdictions and human rights bodies. The second is an inclusive right for all members of society to benefit from the availability of data. This is my definition of access to data in this paper. At the global level, there are different debates about the right of access to data. Some academics recognise data not as a public good but as a resource to be leveraged, while others, like Viktor Mayer-Schönberger from the OII in Oxford, define data as non-rivalrous information and a public good, and therefore open to open access.
We also have the European Commission and the World Economic Forum, which provide different strategies for access to data. The World Economic Forum envisages data marketplace service providers, so that access to data can be traded in an open, efficient, and accountable way: data is tradable and can be managed by platform providers. The European Commission's approach is more of a "data access for all" strategy, with different requirements for business-to-government data access and the creation of common European data spaces for important areas. Looking at the Chinese debate, it is interesting because the right of access to data has never been treated as an independent right, but as part of the right to information, and data itself has been treated as a matter of property rights. It is not treated as an individual right, but as part of the right to information or as a property debate. In terms of public information, if the data is owned by the government, there are two approaches. One is that public data is a public good and belongs to all people. The second is that public data should belong to the state, while non-public data, like personal data, should be subject to personal data protections. But there is no explicit debate about what the right of access to personal data is, and the equality dimension of the epistemic right, such as equality of access and availability of information and knowledge, has not drawn much attention in the Chinese debate either. Data access rights in China are therefore treated as a property discussion: the aim is to formulate a trading system so that data can be traded to generate value.
This kind of definition is also triggered by government policy projects on the utilisation of big data. Chinese academic debates are therefore heavily policy-driven: the positions of Chinese academia are largely shaped by the government's policy. Very few academic contributions support the public good nature of data, arguing that data sharing should be the default position and that control of access to data requires justification because data is naturally a public good. So we can see the debate is quite different from debates elsewhere at the global level. Turning to the Chinese government's policies regarding access to data: from 2015 to 2020 there were various action plans and big data development plans. The most important are the opinions on building a better market allocation system and mechanism for factors of production, and the opinions on building a data basic system for the better use of data as a factor of production. Basically, these policies define the data property right as consisting of three rights: data is treated as property, but the property right is divided into an access right, a processing right, and an exchange right in the Chinese context. Here we want to look at how the policies provide rights over the different kinds of data. First, public data: this is the data generated by party and government agencies, enterprises, and institutions in performing their duties or providing public services. The policies strengthen data aggregation and sharing: you can access this public data, but you need authorisation.
There is also conditional free access, but for particular data you have to pay. However, public data is not to be accessed directly: it must be provided in the form of models, products, or services, not as the original dataset. You cannot access the dataset itself, but you can access the models, products, or services generated from the public data. Second is personal data, which concerns personal information. There is a process by which data can be collected, held, hosted, and used with authorisation, but personal data has to be anonymised to ensure information security and personal privacy. The rights of the data subject are limited to copying and transferring the data generated by them: you have a right of access to your personal data, but you can only obtain a copy or transfer the data generated on one platform to another platform. Third, enterprise data is data collected or processed by market actors in production and business activities that does not involve personal information or the public interest. The policies recognise and protect the enterprise's right to process and use data, protect the right of the data collector to use the data and obtain benefits from it, protect the right to use or process data in commercial operations, and regulate the authorisation of data collectors to third parties. The original data is not shared or released, but access to the analysis of the data is shared. Government agencies can also obtain enterprise and institution data, in accordance with laws and regulations, in order to perform their duties. That is the right of access to enterprise data. So the conclusion is, first of all, that access to data is not a defined aspect of the epistemic right, but in the Chinese context there are different yet also similar interpretations.
The epistemic right in the Western academic literature is approached from the sociological nature of the creation and dissemination of information and knowledge. The right is underpinned by the normative criteria of equal access, the availability of information and knowledge, and use for the benefit of the individual and society as a whole. Data is treated as a form of knowledge, a non-rivalrous informational good, a public good for the benefit of all; therefore, open access to and sharing of non-confidential data is proposed. In the Chinese context, the epistemic right has not yet drawn any attention in Chinese academic debate. The closest related concept is the right to information, but this is approached from a legal rather than a sociological perspective, stressing commercial rights and the public's right to information with respect to public data. Data is defined as one kind of factor of production for national economic development. This is very tricky, because there are the traditional four factors of production, like labour, land, and capital, and data is defined as a fifth factor of production, which is very unique. They recognise data's non-rivalrous character, but data cannot be circulated in the market in the same way as land, labour, and capital. The public good nature of data has not been recognised in the mainstream academic publications or in the government's policy. Because of this, the public good or equal-access dissemination of data is not mentioned in public policymaking.
Under this premise, data collection, analysis, and processing are aimed at unlocking the potential commercial value of data, especially enterprise data, and the definitions of the various kinds of data focus on commercial value in both the debate and the policy context. Therefore, in the Chinese policy and academic debate on data, the focus is on the rights and interests of data enterprises, not on individual rights. The power imbalance between individuals and corporations, and the sharing with individual users and data subjects of the benefits derived from data, have not been addressed. I think that is all of my presentation and the argument of my paper. Thank you very much.

Moderator:
Thank you, thank you. And very close to time, so we're starting off well. Thank you very much, Yik Chan. I hope you can stick around. We'll move on.

Vagisha Srivastava:
Hello, everybody. Good morning. I'm going to talk about Web Public Key Infrastructure and the Private Governance of Trust on the Internet. I know it's a cool title. All credit goes to Dr. Milton Mueller here in the audience, Dr. Carl Grindle, who couldn't join us in person but is likely joining us virtually. Hi, Carl. And me, the lovely Ph.D. student you would want to have around. And of course we would like to acknowledge the generous grant from the ISOC Foundation for this research. I am going to tell you the story of Internet security, and like all good stories, this one, too, starts with a tragedy. There was a security breach: a company called DigiNotar misissued 500 certificates on the Web. It was later identified as a man-in-the-middle attack by Iranian hackers. But the company took no action for two months into the breach, until the Dutch government intervened. It was only found out because one of the misissued certificates was for Google. So what is DigiNotar, what's a certificate, and why is this story a significant plot point for what I'm talking about today? Okay. We are all familiar with this: when a user types a Web address into the browser, the little lock sign tells us that the connection is secure. HTTPS, which we have all heard about, enables a secure channel for communication. But the browser still needs to verify that the server or the client is in fact who they claim to be, and that is done using digital certificates. Digital certificates authenticate parties on the Web. Certificate authorities, or CAs, issue certificates to website operators upon request. In issuing certificates, they are supposed to verify the identity of the entity making the request, and the certificates then act as a recorded attestation that the holder is, in fact, who they claim to be.
WebPKI is the web's public key infrastructure: the system that supports digital signing, signature verification, and encryption for artefacts such as certificates, using public key, or asymmetric, cryptography. Now, before we get into the details of that, let me first lay out what we are trying to do in this paper. There is a bunch of literature out there from the technical community on the workings of WebPKI and the security, or lack thereof, provided by certificates and certificate authorities. We are looking at it from a governance perspective. We are questioning the commonly held notions of public goods and their delivery mechanisms: a public good is often provided by the government, a private good by the market. We are situating the governance of WebPKI specifically within the framework of a public good being provided by private actors. We argue that the production of public goods, and some non-public goods, requires collective action, but not necessarily state action. Governments are but one vehicle for providing these goods, not the only one, and definitely not the most efficient one out there. The paper offers an innovative perspective on the dynamics of the private production of public goods in the context of internet security. OK, one more slide. We use the framework of institutional analysis. We identify the public good in question, then we identify the stakeholders and whether they cooperated or competed to achieve that public good. Did they overcome the known barriers to collective action? We then describe the rules within which these stakeholder groups institutionalised, and finally use the data we collected to assess the efficacy of the institutions in achieving the desired result, that is, enhanced security. OK, shifting gears again. If there are only two parties communicating over the web,
as long as these two parties can authenticate each other, the adoption and use of encryption on the public web does not require any special form of institutionalised collective action. The hard part is the authentication process when multiple servers and multiple clients are involved. It requires a reliable and trustworthy mapping of the private key holder to the public key. In the WebPKI ecosystem, digital certificates facilitate this mapping. When a server presents its digital certificate, which includes a public key, during a secure connection setup, the client can verify the certificate's authenticity and trustworthiness. Public key cryptography eliminates the need to transmit private keys over insecure networks. However, it also creates an impersonation problem, and certificates solve it for the web. A mismatch between the two, that is, the private key and the public key, enables a man-in-the-middle attack. That is what we saw with the DigiNotar incident. But this brings us to the question: how do we trust a CA not to be a bad actor or, worse yet, a compromised actor? There is a chain of trust that enables us to trust a subsequent CA, where each subsequent or intermediate CA has to comply with a certain set of policies set by the browsers. The endpoint, a root CA, is maintained in a root store by browsers and operating systems and has to go through a complex vetting process to be included in the root store program. Now that we have established that authentication is a public good, let's spend a minute understanding why collective action is required. The web ecosystem as a whole needs effective authentication across the board. Security here is not a private good, because a compromised certificate or certificate authority has the potential to affect any website or any user across the system, and no single actor alone has the incentive to provide it.
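The chain-of-trust idea described here can be sketched in a few lines of Python. This is a toy model under assumed names ("GlobalRoot CA", "Intermediate CA", and "example.com" are all hypothetical); real WebPKI validation also checks cryptographic signatures, expiry dates, revocation status, and name constraints, none of which are modelled here.

```python
# Toy model of WebPKI chain-of-trust checking (illustrative only).
# A browser's root store is a set of trusted root CA names; each
# certificate is modelled as a (subject, issuer) pair.
TRUST_STORE = {"GlobalRoot CA"}  # hypothetical root store contents

CHAIN = [
    ("example.com", "Intermediate CA"),    # leaf certificate
    ("Intermediate CA", "GlobalRoot CA"),  # intermediate certificate
]

def chain_is_trusted(chain, trust_store):
    """Walk the chain: each certificate's issuer must be the subject of
    the next certificate, and the final issuer must sit in the root store."""
    for (_subject, issuer), (next_subject, _) in zip(chain, chain[1:]):
        if issuer != next_subject:
            return False  # broken chain: nobody vouches for this issuer
    return chain[-1][1] in trust_store

print(chain_is_trusted(CHAIN, TRUST_STORE))  # → True
print(chain_is_trusted([("evil.com", "Unknown CA")], TRUST_STORE))  # → False
```

The design point the sketch makes is the one in the talk: trust bottoms out not in mathematics but in the root store, which is why the vetting of root CAs and the governance of root store programs carry so much weight.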
Browsers and operating systems cannot be responsible for screening every single website on the web. The digital ecosystem depends heavily on trust to work, so it needs an authentication mechanism that is applicable everywhere, and that does need collective action to be enforced. Who are the stakeholders in the ecosystem? We identify four. Security risks are most concentrated at the top, at the root stores in the browsers, and have diminishing systemic effects as you go down. There are hundreds of certificate authorities, millions of subscribers who get certificates, and billions of end users and individual devices who rely on these certificates for authentication. According to Mancur Olson, collective action is costly: there are coordination and communication costs, and the bigger the group, the more those costs rise. The institutional solutions to the collective action problem for WebPKI focus on the top of the hierarchy, that is, the browsers and the CAs, and do not try to directly involve subscribers. The root stores act as a proxy for the end users, and the certificate authorities act as a proxy for subscribers. We identify three institutional vehicles, the main characters of our story: the Certificate Authority and Browser Forum, which we'll talk about in more detail in a minute; Certificate Transparency, an Internet security standard for monitoring and auditing the issuance of digital certificates through decentralised logging; and ACME, the Automatic Certificate Management Environment, a communication protocol for automating interactions between CAs and their users' servers. Okay. So, the Certificate Authority and Browser Forum. Remember the DigiNotar slide? Well, I might not have been completely honest when I said the story started with that tragedy; it started a little bit before that. Narrative privileges. From 1995 to 2005, certificates were being issued with virtually no standardised governing rules in place.
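The ACME protocol mentioned above can be illustrated with a toy version of its HTTP-01 challenge logic. This is a sketch under stated assumptions: real ACME (RFC 8555) uses signed JSON requests and a JWK thumbprint of the account key, which is approximated here with a plain SHA-256 hash, and the token, key, and paths are all hypothetical.

```python
# Toy sketch of ACME's HTTP-01 domain-validation idea (not real RFC 8555).
import hashlib

def key_authorization(token: str, account_key: str) -> str:
    # Real ACME: token + "." + base64url(JWK thumbprint of the account key).
    # Here we stand in a SHA-256 hex digest for the thumbprint.
    thumb = hashlib.sha256(account_key.encode()).hexdigest()
    return f"{token}.{thumb}"

# The applicant proves control of a domain by serving the key authorization
# at a well-known URL on that domain; the CA fetches it and compares.
token = "abc123"  # hypothetical challenge token issued by the CA
served = {f"/.well-known/acme-challenge/{token}":
          key_authorization(token, "my-account-key")}

def ca_validates(token: str, account_key: str, fetch) -> bool:
    expected = key_authorization(token, account_key)
    return fetch(f"/.well-known/acme-challenge/{token}") == expected

print(ca_validates(token, "my-account-key", served.get))  # → True
print(ca_validates(token, "attacker-key", served.get))    # → False
```

Because the whole exchange is mechanical, it can run unattended, which is the automation that, as the talk notes later, helped Let's Encrypt drive mass adoption of DV certificates.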
The Certificate Authority and Browser Forum was founded in 2005, but it was in 2012 that the forum started actively making rules for the system. Since 2012, the forum has produced a set of baseline requirements for CAs that tackled the convergence of expectations between the browsers and the CAs on issues such as identity vetting, certificate content and profiles, certificate revocation mechanisms, algorithms and key sizes, audit requirements, and delegation of authority. The baseline requirements have been revised about every six months by means of formal ballots approving amended text. The second part of our methodology involves studying the CA/Browser Forum in particular. We first collected data from the forum meetings, including attendance records and meeting minutes, and we also conducted 10 semi-structured interviews. To capture market share, we used random sampling to sub-sample around 2 million domains from the Common Crawl database. The CA/B Forum was described by one of our interviewees as a place for the root stores to coordinate their policies so that they don't create conflicting policies, and to get feedback from the CAs directly. While we do see on the chart that there are more European than US member organisations within the forum, US participants are more active when it comes to participation. We tracked the activity of different stakeholders in the forum, and you can see an increase in participation following 2017, which was because of the addition of a new working group. Between 2013 and today, the browsers have become… We also note the active participation of the US Federal PKI. There are many economic conflicts of interest between the browsers and the CAs, but we can see from our analysis of the voting records that in 92% of the ballots, a majority of both stakeholder groups supported or opposed a proposal. In only 2% of the cases did the browsers favour a proposal that was opposed by the CAs.
This data was collected in February 2023. We see that Let's Encrypt, which is a civil society effort to encourage the use of encryption and to automate the issuance of certificates, is dominating the market. This didn't used to be the case, and it clearly showcases how automation has led to increased adoption of DV certificates. If externalities caused by poor CA practices are the main drivers of collective action, we should expect to see a gradual homogenisation of the root stores across browsers over time. We do see substantial overlap in which root certificates the browsers admit into their trust stores. We also see a gradual reduction in the number of trusted certificates within the root stores over time. The measure of efficacy is interesting here. We see that encryption on the web has increased significantly over time. We also observed that misissued certificates have decreased over time. However, while the global misissuance rate is low, this is predominantly due to a handful of large authorities that consistently issue certificates without error. The three largest CAs we identified in the market share, Let's Encrypt, Cloudflare, and cPanel, signed 80% of the certificates in the data set and have near-zero misissuance rates. Now, perhaps the most important finding: why have private actors taken the lead in security governance in this case? Well, governments are politically structured such that they cannot easily represent a global public interest or produce global public goods. Their authority is fragmented, and there are numerous rivalries among them, especially when it comes to cybersecurity. The platforms have a greater alignment with the security interests of their users than national governments: an insecure web environment hurts their business interests, while governments are not directly harmed.
Also, governments often have a strong interest in undermining encryption and user security for surveillance purposes. The implementation of WebPKI involves an elaborate web of technical interdependencies. Security measures impose costs and benefits on all four stakeholder groups, and those directly involved in the operation and implementation of WebPKI standards are in a better position to assess the costs and risks of the trade-offs and make rational decisions. But government is not entirely absent. We see US Federal PKI organisations participating actively. We have observed from the meeting minutes and the interviews that the EU is pushing its eIDAS regulation onto this ecosystem. We also see the involvement of national CAs, mostly from Asia, representing their interests in the forum. Now, why should you care? Well, a lot of the time when insecurity happens on the web, the blame is put on the users, because they engage in unsafe practices. You remember the "always proceed" option that appears on a certificate mismatch? This is not a perfect system. There are still compromises that can happen because of misaligned incentives, or just oversight because of the redundancy of the process. In some cases these could be intentional, for example the selling of backdated certificates. But it's always better to know a little bit more. All of the good things about the internet rely on this ecosystem: for example, ensuring that your cat photos come from the secure server they claim to come from when you're surfing the web. The topic is also understudied within the Internet governance community, and hence should be of relevance to the scholars present in the room. Okay. So, like all good stories, this one doesn't end here. We will possibly have a bunch of sequels. We are planning to do a more detailed study of how governments intervene in the system.
We would also like to measure the effectiveness of Certificate Transparency, which we mentioned as an institutional vehicle, and maybe examine the impact of automation, or ACME, on the system. That's all. Thank you very much.

Moderator:
Thank you very much, Vagisha. Again, you did the timing very well. Thanks a lot. We now move to Berna. I will just set up your slides on my computer, and in the meantime I give you the floor. Okay. Do you need your computer? Okay. Thanks.

Berna Akcali Gur:
Thank you, Jamal. So this paper is one of the outcomes of our research project, funded by the ISOC Foundation, on the global governance of LEO satellite broadband. In that project, we focused on the jurisdictional challenges to the integration of mega satellite constellations into the global Internet infrastructure. The report resulting from that study can be found on the website that I'm sharing in the PowerPoint. There you can also see the link for a separate ISOC project on LEO satellite connectivity; the ISOC group assessed the subject from a purely policy perspective, we joined forces at times, and I recommend that report as well. Now, as we were conducting that study, the satellite broadband industry picked up pace. More and more applications have been filed at the International Telecommunication Union for new mega-constellation projects. While there is a certain degree of excitement about them, scientists studying space and astronomy have raised their voices about the impact of these projects on space sustainability and the space environment. So we decided to analyse the tension between the competing interests, that is, universal broadband connectivity for sustainable development and cyber sovereignty on one side, and the sustainable use of space resources on the other, from a law and policy perspective, which inspired this paper. It is still a draft, so we welcome any constructive feedback. So what is new about satellite connectivity? We all know that space technology and satellites have long been a complementary part of global communications infrastructure. Most often, they have provided last-mile solutions in remote and sparsely populated areas, such as islands or villages in the mountains, because these areas are not easily served by terrestrial networks. And we shouldn't forget that we still use them in transportation, such as on ships and planes. So communication satellites are not new.
The idea of multi-satellite systems, the constellations, is not new either. Earlier constellations in low Earth orbit emerged in the 90s; Orbcomm, Iridium, and Globalstar are examples. These consisted of smaller numbers of satellites and provided voice and narrowband data. They were not viable businesses for mass consumption: they were expensive projects, and they couldn't compete with the speed and capacity of terrestrial networks, so they didn't really receive much attention. Recently, advances in communications and, separately, space technologies, dramatic reductions in launch costs, financing by the technology sector, and, most importantly, the ever-growing demand for broadband drove a second wave of satellite constellation ventures. These are very ambitious projects with increased numbers of satellites. Some leading examples are the 42,000 satellites planned by the US venture Starlink, the 13,000 satellites planned by the Chinese venture Guowang, and the 648 of the UK and India venture OneWeb. So the newness, in a sense, is in the scale of these projects. How do these ventures relate to the Sustainable Development Goals? As you all know, for most social, economic, and governmental functions, the use of applications enabled by low-latency, high-bandwidth connectivity has become ever more essential. Low latency is particularly important for web-based applications that require high speed; some applications I can mention are the Internet of Things, video conferencing, and video games. The new constellations are able to meet this requirement because the data travels much faster when the communication satellite in use is
That is why the emergence of these satellites have been met with enthusiasm in the context of their potential contribution to bridging the global digital divide and global development. But how does the system work? Are these satellites infringing on territorial sovereignty of countries by providing Internet from the skies? Well, we should first understand the technicality behind this to understand how the domestic regulations work. So, the ground stations, they act as a gateway to the Internet and our private networks and the cloud infrastructures. Currently, the distance between the ground stations is required to not exceed about a thousand kilometers. The second component is the user terminal by which the users connect their devices to receive broadband services. These are provided by the satellite company operating the system. Additionally, satellites need an assigned frequency spectrum, a limited natural resource, as the satellites communicate with the Earth through these radio waves. The user terminals will link to the satellite in closest proximity, which may be a different satellite in the constellation at a given time. That satellite will be connected to other satellites, one of which will have a connection to the ground station. Then, there is a cloud infrastructure. The satellite companies will use cloud infrastructure, which is a mutually beneficial relationship as the cloud infrastructures benefit from their connectivity as backup to their existing setup. As I said, the provision of satellite services is not limited to the cloud. within a particular country is subject to that country’s laws and regulations. These are called landing rights and the countries decide the terms of landing rights for themselves. For example, the ground station. For that, the companies will need authorization from each relevant jurisdiction. They will also need to obtain a license to use the frequency spectrum. 
If they provide their services directly to consumers, they will likely also need an internet service provider licence. What is more, the importation of their user terminals will be subject to the import requirements of the national authorities. So the provision of satellite broadband service by a company is subject to a wide range of laws and regulations of the host country. For example, Russia and China have already declared that they will not allow the provision of satellite broadband by foreign service providers. The countries with space capabilities felt it would be better if the existing domestic control mechanisms were complemented by ownership and control of their own mega-constellations; these have frequently been referred to as sovereign structures. Competition is perceived to benefit markets and end users, so at first glance it seems like we have more of a good thing: with more choices for all, business models will mature, and that should be celebrated. But when we look at the reasoning behind these investments, the governments emphasise their strategic value and the significance of a sovereign alternative infrastructure for digital sovereignty and cybersecurity purposes. The financial viability of these ventures is still not certain, so there isn't much emphasis on that. The digital sovereignty and cybersecurity concerns incentivise countries and regions to align their communication infrastructure and its control along their borders. I'm looking at Milton because he coined the framing of alignment as fragmentation; these ventures are also manifestations of the ongoing fragmentation. Okay, so the foreseeable harms of this new competition to space, and particularly to the orbital environment, are grave. From launch emissions to orbital debris, the current regulatory framework is simply not sufficient to tackle the problem in time. "In time" is the operative phrase here.
Due to the exponential increase in the number of space objects, space traffic has become more challenging to manage, and more collisions are anticipated. The space environment is becoming more prone to collisional cascading, which means that once a certain threshold is reached, the total volume of space debris will continue to grow. This is because collisions create additional debris, leading to more collisions, creating a cascading effect. Such a catastrophe may render not only the low-Earth orbit, but almost all space resources inaccessible for all, even for future generations. So because there is a competition among powerful nations to each have as many constellations as they can afford, the orbital environment may become unusable for any service, including connectivity and travel. So future generations may be locked in, trying to figure out how to clean up the orbits and restore them. Space resources are limited resources. Orbital space is already congested. Space traffic is difficult to manage, and there is a risk of collisional cascading. So is the promise of constellations, especially the multiplication of space-based internet infrastructure, worth the risks we impose on the space environment? Internet governance scholars have known for a long time that advancements in most information communication technologies are perceived in terms of their potential impact on global power structures. Megasatellite constellations are also deemed strategic investments, both in terms of space presence and in terms of influence and control over global Internet infrastructure. And I’ll just skip this part. So, anti-fragmentation efforts have been deemed to have a significant impact on the openness and unity of the Internet. But now the same can have an impact on the sustainability of space resources.
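The cascading dynamic described above (collisions create debris, and more debris raises the chance of further collisions) can be illustrated with a toy model. All parameters here are invented for illustration only and are not drawn from any real debris-environment model.

```python
def simulate_debris(initial_objects, years, collision_rate=1e-5,
                    fragments_per_collision=100):
    """Toy cascade model: the expected number of collisions per year grows
    with the number of object pairs, and each collision adds fragments.
    Illustrative parameters only, not an empirical debris model."""
    count = float(initial_objects)
    history = [count]
    for _ in range(years):
        pairs = count * (count - 1) / 2          # any pair of objects may collide
        expected_collisions = collision_rate * pairs
        count += expected_collisions * fragments_per_collision
        history.append(count)
    return history

low = simulate_debris(100, 10)    # small population: slow growth
high = simulate_debris(2000, 10)  # above the toy threshold: runaway growth
print(low[-1] < 300, high[-1] > 1e6)
```

Because the collision term is quadratic in the object count, the same rule that leaves a small population nearly stable sends a large one into runaway growth, which is the threshold behavior the talk calls collisional cascading.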
So, we argue that the impulse to compete in the Earth’s orbits, a space that is already congested, should be mitigated in consideration of preserving a sustainable orbital environment for future generations. Environmental efforts of global multi-stakeholder Internet governance platforms could inform environmental and sustainable outer space governance efforts, especially as they relate to space-based Internet infrastructure. Thank you.

Moderator:
Thank you very much, Berna. And yes, the timing was perfect. I will now move quickly to Kimberley. Kimberley, you are online. Yes, you are online. Can we please make Kimberley Anastasio a co-host? Thank you. And Kimberley, the floor is yours. Oh, cool. And we can hear you. We heard something.

Kimberley Anastasio:
All right. Can you confirm that everything is working properly? Wonderful, thank you. All right, hello everyone, and thank you to the GigaNet organizers. It is my pleasure to be here today talking to you about a project that is part of my dissertation research at the School of Communication at American University. This project addresses the intersection of information and communication technologies, ICTs, and the environment, focusing on ICT standards. We’re meeting now at the IGF, and the Internet Governance Forum set the environment as a main thematic track for the first time in 2020. And it is definitely not alone in such an endeavor among internet governance organizations. Recently, plenty of standard-setting organizations, the organizations that establish rules for how ICTs work and how information circulates on the internet, have also been turning their attention to environmental concerns and working on the creation of what they call, quote unquote, greener internet protocols. The paper that I’m presenting today is the first step towards this broader research project on ICT standards, in which I talk to people working on ICTs in relation to the environment about the implications that standards can have for enabling and constraining environmental rights. Meaning here the right to a healthy, clean, safe, and sustainable environment, as established by the United Nations Environment Programme. And this is research rooted in the communication field called environmental media studies, a field that addresses the overlapping spheres of environmental issues and the production and uses of new media, including ICTs. But still, among the researchers that are part of this environmental media studies field, the focus tends to be more on data centers, on AI, on the things that are more closely visible to internet users. And one niche but fundamental part of ICTs is usually overlooked: ICT standards.
So I’m also joining infrastructure studies that deal with this more hidden layer of ICTs. Moreover, we know internet governance scholarship has for a long time been examining internet standardization processes, seeing ICT standards as things that are not just technical but also political. But when we deal with the values and politics that can be inscribed into internet governance artifacts such as protocols, we usually focus on those rights that are more closely related to the digital world, like freedom of expression or privacy and data protection. But environmental rights should be considered as another example of how politics and rights are embedded into internet governance, and how ICT standards may then be another venue where the politics around the environment are enacted. The broader research is an analysis of both the International Telecommunication Union and the Internet Engineering Task Force and the work that they are doing related to the environment. For this paper, I relied on semi-structured in-depth interviews that I did with 18 interviewees, experts who have advocated for environmental concerns about the internet and its infrastructures, or who are currently doing so. So when I talked to the people already working under this umbrella of environmental rights in the context of ICTs, I asked them about their knowledge of standards and their perspective on where ICT standards could fit the broader agenda. And despite most of them mentioning that they have very little knowledge of standards, their answers ended up echoing both what is being said in the literature and some things that some standard-setting organizations have already been working on for a while. So, echoing the literature, the interviews situated this debate about ICTs and the environment in two parallel understandings of the technologies.
On one hand, digitalization allowing us to enable a more sustainable economy, so ICTs being employed basically to tackle climate change; and on the other hand, focusing on the negative aspects of digitalization, so how digitalization by itself can also have an impact on the environment, be it positive or negative. But in the end, what most of them noted is that to act on this intersection between ICTs and the environment, one should account for both things. So how these standards can help the sector be more environmentally friendly by enabling other things to happen among other sectors, but also how environmentally friendly the ICT technologies themselves can be. And then, when it comes to establishing what roles these ICT standards could play in accounting for both of these things, the promises and the pitfalls of digital technologies, the experts highlighted two main areas of action: basically, establishing a common language or parameters for dealing with this issue, but also establishing mechanisms for accountability. And both of these things were mentioned in relation to the standards that would help us avoid carbon emissions in other sectors, but also in standards that are trying to account for or cut down the environmental impact of ICTs themselves. At the center of this discussion on the intersection and the role of standards is basically this necessity for quantification and addressing the materiality of ICTs, and also the fact that these conversations, which we are starting to have more in the standard-setting organizations, are kind of leading the game. And they come in a context in which the mindset of the ICT sector is one of evidence and consumerism. And we know that quantification is a vital part of what standards are, be they ICT-related or not, because standards are the things that define procedures.
They regulate behaviors, they ensure interoperability, and for that, quantifying, classifying, and formalizing processes is key. But when it comes to measuring the environmental impact of ICTs, from any perspective, so software, hardware, or networking, this is not an easy task. And this means that even when people do recognize the physicality of the internet and the impact that ICTs can have, there is no simple way to quantify its relation to the environment, be it the carbon footprint, the energy consumption, the natural resources extraction, disposability, and things of these sorts. But one thing that we have to keep in mind is that materiality is more than just this palpable thing. As seen on the slide, materiality also refers to the shape and affordances of the physical world, but also the social relations that are part of our lived reality. So we address ICTs as something that is physically located and situated in the environment, although it is surrounded by discourses of immateriality. But to act on this issue does not necessarily mean that we should be stuck if we’re not capable of precisely measuring this entanglement of ICTs in nature, or that we should stop at the measuring phase alone, precisely because we recognize ICTs as something that is relational. An interviewee said something similar when they identified what they believe to be the root of the problem: the root of the problem being not the environmental impact of the separate devices, products, services, the separate standards themselves, but the socio-economic model behind how society deals with ICTs. One interviewee, for instance, said that standard setting is really important because it allows for environmental best practices to come in and enter at the technical level, but that this would be in direct competition with the business-as-usual model of the entire sector.
But some standard-setting organizations are already engaging in environment-related discussions, both by creating standards that relate to the environment and by engaging, as organizations themselves, in these discussions in other settings. And the two organizations that I’ll be studying further for my dissertation, as I mentioned, are the ITU and the IETF. The ITU already has more than 140 standards that are related to the environment. It has one study group, called Environmental Circular Economy, that is dedicated to dealing with these issues; its mandate is to work on the environmental challenges of ICTs. For the IETF, this is something more recent; the ITU has been following the UN Sustainable Development Goals for a while, and the IETF is now catching up on this issue as well. It has almost 20 standards that are more closely related to environmental issues, and it also recently created a group that is dedicated to addressing sustainability issues in relation to ICTs. Just to mention a couple of examples, the IETF has been dealing with a protocol to make Bluetooth less energy-consuming from the perspective of the internet of things, and the ITU has several measurements of the carbon footprint of the ICT sector. And as I mentioned, the scholarship has already established that standards are political things that can incentivize or constrain certain behaviors. Two important ICT standard-setting organizations are already engaging and increasingly acting on environmental matters. The next step for this research is to delve into the work that they’re doing and try to investigate what areas they are trying to tackle, what interests are being addressed there, and how we can move forward with this agenda, even beyond these two organizations and further down their standardization process. Thank you very much, and I’m available for any questions or comments that you might have.

Moderator:
Thanks very much, Kimberley. Right, we have a few minutes, and I will abuse my position as chair and hope that Danielle, the next chair, doesn’t mind starting a couple of minutes later, because we started late. Right, so I will first of all just make some comments on the papers that we heard and the papers that we received from you. Thank you very much for those. Then I will try some leading questions, and hopefully that will stimulate a bit of a discussion. So if you have questions in the audience, already start thinking about how to formulate them, and pray that I don’t raise them first. I’m sure I won’t. Okay, so I will go through the papers in the order that they were presented. So Yik Chan, who is still online, congratulations. It’s four o’clock in the morning or something, so congratulations. Really interesting paper. The fact that it’s already published means that my comments are moot, but I was thinking of how you could actually take this further. I mean, there are many interesting statements in your paper, and the way you position the debates around this and almost show the similarity between the different approaches that you see in the different regions of the world that you looked at. Very interesting. I would have loved to have learned more about your reflections on the sociological nature of that whole data divide. So data, knowledge, information, and so on, and see how that fits in. One of the things that kind of touched me in the paper was the way you explained how those differences and those similarities come out. And so I was very happy to read that, and that stimulated a lot of thought in my head at least. However, I would have liked you to have been a bit more argumentative in that sense. You laid out some of the conditions, and you showed that there are differences and there are similarities, and it would have been nice to see how that plays out in different policy debates that are going on.
Because I know that the EU has its strategy for data, and I’m sure that plays in. You could do a really interesting policy analysis on that, and that might be a next step that you want to go for: to actually try and unpack how these reflections on epistemic rights and so on actually play out in the policy field. That might be really interesting to see how that comes out, because, of course, on one level there’s a lot of conflictual discourse around the different approaches, but what you show is that there are some fundamental similarities as well, and it might be interesting to do that. I was also thinking that there are maybe other regions in the world that have different approaches to data in that sense, and it might be interesting at some point to also reflect on that in the next publication. Maybe it would be interesting to have a section that looks at global approaches; I know that in Japan they have a different approach to treating data, so that might be interesting. Vagisha, your paper, and Milton is in the room as well, I could definitely see the questions around the governance of this, and congratulations for that, because I think also in the paper as I read it, I felt that you were not struggling with it, but it was something where you said, okay, I want to look at this as a governance question and not a technical question, but I need to spend lots of time explaining the technical issues in order to understand the governance side. And so although you focus on the fact that you want to do a governance paper, as I was reading it, I felt that a lot of the technical knowledge, which was very useful, actually left little space or mind space for the reader for those bigger governance questions, which are really interesting.
I also was thinking a bit about how you addressed, so you talked about your narrative prerogative, and how you addressed the story from 2019 first, but that was maybe the problem that made the context and the issue visible. So maybe that’s how you do that. You don’t have to say, I lied, right? I was also wondering, so you mentioned that there are different votes, you said that actually the platforms and the certificate authorities agree on a lot of things, right? I was wondering, the process leading up to the votes, that would be really interesting to understand, right? And then, of course, you mentioned that it’s a private organization dealing with public goods, and I’m going to ask the question that you asked us to ask you. Can you please explain how governments are involved in that? Because they are not explicitly involved, but they are involved, right? And that would be interesting to see, because, of course, there are multiple dimensions to these stories, and I would have liked to have heard a bit more, or teased you a bit more in that sense. Berna and Joanna, thank you very much for your paper as well. When I first read the title, I thought that you were trying to do a lot in this paper. It’s covering low-Earth orbit satellites, it’s covering environmental issues, it’s covering cybersecurity issues, it’s covering quite a lot. At first I was thinking, wow, how are they going to do all this? But you managed, so that was good. I was wondering a bit, when I looked through the paper, I felt that the ordering was sometimes off; there were some bits that probably could have gone a bit earlier, in order to help me understand the flow of the paper. I’ll give you some examples later, but, for example, you introduced the concept of mega-constellation, and I didn’t know what that was until I’d read two pages later. So things like that.
But also, in the way that the argument builds up, I was wondering whether Section 4 may be more interesting than Section 3, and vice versa. You’ve mentioned, rightly, the security concerns, right? But I was wondering, the security concerns and the sustainability concerns actually cross over quite a bit, I think. Because if somebody were to shoot one of these things out of the sky, and if then the cascading effect happens... You treat them as two separate things, and I was thinking it might be interesting to also show that there are direct connections between those two. And then, of course, you go on and you talk about the ITU, but I know that there have been international collaboration efforts. I know the European Union has been trying, at least for a long time with its space policy, to develop things, and I didn’t really see mention of that too much. I thought that might be interesting to bring in, because that then addresses the questions that you had raised in the tensions between the national and the global, right? And there I would be interested: you talked about sovereignty, or states using sovereignty to say we need our own mega constellation, but then in the end that still needs a coordination effort, right? Unless they want to knock each other out of the sky, right? So that’s also something that I think you could raise in your paper a bit more, okay? And then in terms of sustainability, it might be worthwhile to clarify at the beginning of the paper what you mean, because I was also thinking, oh, is it more environmentally friendly to put satellites in low Earth orbit than to have routers or data centers on the planet Earth? But actually, no, you meant something else. Okay. Kimberly Rushing. Environmental rights. Thank you very much for this paper. Really worthwhile effort. It’s part of a broader project, and I’d love to know a bit more about how that fits in.
I think that could be a bit clearer in the paper. You focus on the role of standards authorities; you look at the IETF and the ITU. I was wondering, in your reflections, do you actually think about the normative biases that are built into these actors? I mean, you mentioned the work that’s been done by other scholars that try and unpack those. But right now, you’ve gone through the interviews and you’re looking at those quite literally, and I was wondering if you do that. I think there’s also quite a lot of work, maybe not directly just on internet standards organizations, but on standardization bodies as a whole. And I know you come from the literature that’s looking very much at sustainability and standards, but there may be some work there. Also, there was quite a lot of work published in this space in the 1980s and the early 1990s, so that might be interesting for you to look at. I was also thinking, you do kind of implicitly look, or no, you explicitly mentioned it in your presentation: you look at environmental rights in the human rights context, and I think that was very interesting as well. One of the things, I know you’re only looking at standards, but another area where there’s been quite a lot of reflection is the implementation of data centers and the environmental consequences of those, and I was wondering if some of those debates in the literature might not be interesting. So those are my far too long, but hopefully useful, comments. I would like to see if there are any questions from the floor. Microphones have been put out, so if you want to raise a question, please go and stand behind the mic. Otherwise, we’ll go back to the presenters for a quick response. Milton is going to the mic. Go ahead.

Audience:
Just a question about the satellite paper. You talked about the creation of these government-run mega constellations and somehow that’s related to fragmentation. By the way, I agree with Jamal that it’s hard to combine the environmental, global commons, tragedy-of-the-commons aspect and the fragmentation aspect of your paper, but I’m going to focus on fragmentation. What are they actually doing? Are they proposing to not allow other satellites to distribute signals to their country? And what is their leverage for doing that? And then they’re going to set up their own? Why do they need to set up their own mega constellation to do that, if they’re only concerned about their own territory?

Moderator:
If there are no other questions at this moment, we’ll go back to the panelists. Should we go back in the order? Vagisha, did you want to mention something?

Vagisha Srivastava:
Thank you for the comments. Because of lack of time, I’m not going to address all of them. But I think the voting process that you asked about was interesting to me. When we were going through the ballot readings and everything, and also the interviews, what we learned was that a lot of the formal language that goes to a vote is already agreed upon, and the consensus mechanism is built pre-voting, or pre-setting of the language itself. That could be one of the reasons why a lot of these votes are non-conflicting. But it is still interesting to see how the CAs react, how the browsers react, to the process itself. Do you have any specific question that you want me to answer?

Berna Akcali Gur:
Yeah, okay, Joanna may add to my comments; I think she’s still online. Yes, okay. So, Jamal, thank you for your comments, and I agree, we are trying to address a few important topics all at once in one paper, and I’ll take a look at your recommendations about the EU space policy. I was thinking that the EU space policy, and the fact that they are trying to also deploy their own satellite constellation, may be a contradictory move, because I think there was an EU research paper presented to the parliament saying that the EU doesn’t actually need to own a mega constellation for purposes of access, but they still thought that it was important from a strategic and security perspective. But I should maybe add that to the paper. And about Milton’s question: from my understanding of fragmentation, there are different manifestations of fragmentation, and one manifestation is through government policy and regulations, where the governments try and establish control over infrastructure and the components used for that infrastructure. Decoupling at the 5G infrastructure, for example, was an example of that, where groups of countries have refused to use each other’s technology for cybersecurity reasons, and of course there were deeper geopolitical motives behind that as well. So when I look at the government papers justifying investment in these mega constellations, which are elaborate infrastructures, the governments refer to them as sovereign infrastructures that are necessary for cyber sovereignty and cybersecurity reasons. And so it is from their policy papers that I see that they see these as territorial infrastructures: although they are not terrestrially located within the land, control of these infrastructures still lies with the companies that are located within their territories. So they are very much seen as territorial infrastructure by those that can deploy these constellations. So what about the others?
From the previous research that we had done, the countries that cannot have their own mega constellations but are planning to use them see data governance, for example, as their major concern. So, take the gateways to the internet, the ground infrastructures that you need to have every 1,000 kilometers. Some countries were saying that if we are going to authorize the services of these mega constellations, maybe we would like to require them to have a ground station within our territory, even if they don’t need one, even if there is one within 1,000 kilometers. And the intention is to control cross-border data transfers, and to maintain the control that they already have, or extend that control, in accordance with their policies, which are still developing as the geopolitical tensions intensify. So I hope that answered the question.

Moderator:
I think you have to... Joanna, did you want to add something? Nothing further from me, okay. Perfect. Kimberley?

Kimberley Anastasio:
Okay, I’ll try to be very fast and say thank you very much for your comments. Jamal, you mentioned three things that are the things I’m currently working on, which I think is very appropriate as feedback. Yes, I am trying to now situate my study better among studies that deal more with standardization as a whole, and not just standardization from an ICT perspective. And just to explain this project a little further: the bulk of the project is based on a methodology that involves the content analysis of the almost 200 standards that have been either approved, under discussion, or rejected in the two organizations that I am analyzing, and interviews with the participants of these organizations, so the ITU members and the IETF members. But in order for me to properly understand the work that these organizations are doing in light of the possibilities for the ICT standardization sector as a whole, I felt it was necessary to come up with a framework of action, not only from the literature on the environmental impact of ICTs, but also from the perspective of those working on the ground trying to build this agenda in international organizations and spaces like that. So that’s where this smaller project fits into the broader one: it is to help me come up with this framework of action, through which I’ll then analyze how two particular organizations are engaging in this matter. But thank you very much, and I’ll wait for your further comments on the paper. So thank you, thank you all.

Moderator:
Right, thank you very much for all of the interesting papers. Well, I think this has been a great start to the symposium, so thanks very much to all of the speakers and all of the paper writers. Thanks a lot. I will now leave the floor. Should we just leave it here? Yeah? You take it from me. Danielle, I think, in the interest of time, we won’t have a five-minute break and we’ll move straight to the second panel. Is that okay with you, Danielle? Yeah, okay, we can have a bathroom break. You will be timed. So if you need a couple of minutes, just use a couple of minutes, and otherwise we’ll get back to you straight away. Okay. Well, I was gonna say you can sit, I’m gonna sit down there. You don’t need me, do you? No, okay. I’m Danielle Flonk. I’m an assistant professor in international relations at Hitotsubashi University in Tokyo, and I’ll be chairing and discussing this session. Today we have Nanette Levinson, who’s presenting on institutional change in cyber governance; Jamie Stewart on women, peace and cybersecurity in Southeast Asia; and Kamesh and Ghazim Rizvi on making the design and utilization of generative AI technologies ethical. Basically, everybody gets ten minutes to present, after which I will give five minutes of feedback. Nanette? Is Nanette here? Okay, go ahead.

Nanette Levinson:
Yes, can you hear me? Yes, perfect. Thank you. Good morning Kyoto time, good evening my time, good day to whatever time zone one may be on. The papers that were just presented in the first panel set the scene, I think, beautifully. They were fantastic papers, wonderful discussion. I’m going to share my screen now. I believe that’s working, excellent. I’m going to share with you some work from the past year of a project that I’ve been working on for the last four years. I’ve been researching the United Nations Open-Ended Working Group dealing with cybersecurity, which ran from 2019 to 2021; the second rendition continues now and is actually due to go until 2025. And as we all know, this has been a particularly unusual time period, punctuated by a pandemic and the war in Ukraine. What I’d like to do in my presentation is focus just on this past year, 2022 to 2023. I’m going to share with you a few of my key research questions, several of my major findings, and, very briefly, some thoughts on future research in this arena. I want to highlight three research questions. I’ve been thinking about the field of internet governance for a number of, or at least several, decades, and I wanted to have a chance in this paper to take a long-term view, thinking about institutional change using various disciplinary approaches. And I was particularly interested in what could be called deinstitutionalization processes in cyber governance. The proposal on which I focus within the discussions at the Open-Ended Working Group is for something called a program of action, which involves a more regular way to include other stakeholders, stakeholders other than governments, as a part of regular institutional dialogue related to cybersecurity at the United Nations.
In the paper, I formulate a cross-disciplinary approach to these analyses, and I ask the question: how do the findings from this longitudinal study of the Open-Ended Working Group relate to work on institutional change? And further, I ask what possible catalytic factors could be at work related to such changes. In order to do that, I go back to some work that I did a little bit earlier, where I looked at institutional change indicators, and I want to highlight three of them here. First, an indicator of institutional change, or incipient institutional change, is the absence of an authoritative analogy or the presence of inconsistent isomorphic poles. Second indicator: a change in the legitimacy of an idea and a change in the rhetoric related to it. Third indicator, and these are not sequential, they’re rather a chaotic continuum: the emergence of a new variety of organizational arrangements consistent with a new idea. And all of these indicators I look at against the backdrop of increasing uncertainty and turbulence in the environmental setting of the Open-Ended Working Group, and indeed major geopolitical pulls. Here are a few of my findings at a glance. My earlier work on the Open-Ended Working Group from 2019 to 2021 noted the presence of what I called an idea galaxy. And what I mean by that is simply a cluster of specific words that appear near one another, and the subsequent positioning of these words next to or very near a value or a norm that is already more generally accepted. So in 2019 to 2021, I discovered the following words: human rights, gender, sustainable development (or international development, or developing country), and, less frequently, non-state actors or multi-stakeholders. And they often were linked, both in oral presentations and in written submissions, and I used content analyses on all of these. They were most often linked to sections dealing with capacity building. Interestingly, the 2021
Open-Ended Working Group final report was adopted by consensus. And again, this echoes some of the discussion about consensus in standards organizations highlighted in the earlier papers. Interestingly, those words were adopted by consensus in that 2021 Open-Ended Working Group. But what has occurred in the past year, 2022 to 2023, is a fascinating development. The same idea cluster appears in many submissions, many oral presentations, many informal sessions with other stakeholders, but there also appears another, opposing cluster or idea galaxy, which I term a dueling idea galaxy. Let me say more about this. We remember the idea cluster that was accepted by consensus in 2021. This appears in much of the discussion, and it did appear in draft versions of the annual progress report that was supposed to be adopted by consensus at the fifth substantive session just a couple of months ago in New York City at the United Nations. However, interestingly, a dueling idea cluster was introduced on the very last day of that discussion, in opposition to accepting the report, with those words from 2021 in it, as a consensus agreement. Instead, the Russian delegation, along with the Chinese delegation, Belarus, and maybe four or five other countries, said that it was not going to go along with consensus, that it strongly wanted, and had a rationale for, something different, and I put this in italics: their idea cluster was wording such as convention or treaty. And this really signified their commitment to the development of new norms in the cybersecurity area. It also signified opposition to this program of action idea as a part of regular institutional dialogue. I do want to point out that the idea of a treaty was not new in 2022-2023; it appears throughout the discussions. But what is new is its placement in direct opposition to the first idea galaxy above, the one that was adopted by consensus in 2021.
These dueling clusters reflect the presence of catalytic factors, especially the war in Ukraine, and they provide indications of potential institutional change and increasing turbulence, possibly marking the end of a long cycle in the internet governance trajectory that included roles, albeit “appropriate roles,” quote unquote, in certain terminology, for non-state actor stakeholders. So let me conclude and talk a little bit about future research. The outcome that I just alluded to of the 2022-2023 discussions, in terms of ultimately getting consensus on the annual progress report of the Open-Ended Working Group that was just submitted to the General Assembly, I guess in September, went down to the very last moments of the very last day of that final fifth substantive session. And the only way that consensus was achieved was by the Open-Ended Working Group Chair, Ambassador Gafoor, who took a suspension and went around to do informal negotiations, and he solved the dissensus by what he termed, in his words, quote, technical adjustments to assure the consensus. And a delegation head termed his technical adjustments footnote diplomacy. Very quickly, the chair crafted two separate, independent footnotes, and I call these balancing the dueling idea galaxies. Each of the footnotes gave a small amount of recognition to each of those idea clusters and set the stage, of course, for further discussion in the 2023-2024 Open-Ended Working Group ahead. And there are many dates set ahead and discussions related to this topic. So in sum, there are indications of potential institutional change. My project is going to continue to identify any emergent or disappearing idea galaxies in the year ahead. These relate to those conflicting isomorphic poles that I began with as indicators of institutional change.
And I hope to be able to use, now that we are primarily post-pandemic, a more mixed-methods approach to capture individual-level idea entrepreneurship in these turbulent times, times that continue to catalyze change processes. And with that, I’m gonna turn the floor back to our chair. Thank you.
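The “idea galaxy” method described above, counting clusters of specific words that appear near one another across submissions and oral statements, can be illustrated with a small, hypothetical sketch. The seed terms, window size, and sample text below are illustrative assumptions for demonstration, not the project’s actual corpus, term lists, or parameters:

```python
from itertools import combinations
from collections import Counter

# Hypothetical seed terms for the two dueling clusters (illustrative only)
GALAXY_A = {"human rights", "gender", "sustainable development", "multi-stakeholder"}
GALAXY_B = {"convention", "treaty"}

def cooccurrences(text, terms, window=25):
    """Count pairs of seed terms whose start positions lie within
    `window` tokens of each other in the text."""
    tokens = text.lower().split()
    hits = []  # (token index, matched term) for every seed-term occurrence
    for i in range(len(tokens)):
        for term in terms:
            n = len(term.split())
            if " ".join(tokens[i:i + n]) == term:
                hits.append((i, term))
    pairs = Counter()
    for (i, a), (j, b) in combinations(hits, 2):
        if a != b and abs(i - j) <= window:
            pairs[tuple(sorted((a, b)))] += 1
    return pairs

sample = ("capacity building must advance human rights and gender equality "
          "as part of sustainable development in every developing country")
print(cooccurrences(sample, GALAXY_A))
```

In a study like this one, such a function would be run over each written submission or session transcript, and pair frequencies compared across the 2019-2021 and 2022-2023 periods to see clusters emerge, persist, or come into opposition.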

Moderator:
Thank you, Nanette. I really like this paper. I’m gonna give feedback now and then we go to the next paper. So I really like this paper because it addresses the big questions in global internet governance, and it looks at recent developments in an important institution, namely the Open-Ended Working Group. I have two broader feedback points, one on theory and the second on empirics. So on theory, a number of things, I think, could be further clarified. First, you use a lot of concepts, especially when you set out the different indicators of institutionalization and deinstitutionalization and the stages of institutionalization. Do you really need all these concepts, such as habitualization, objectification, sedimentation? Many of these concepts do not come back in the analysis, and I would only focus on those that you actually need for your analysis, and define more clearly what you mean by them. Second, you use institutionalization and deinstitutionalization processes as a binary, but what about the literature on contested multilateralism or counter-institutionalization? There, authors emphasize competitive regime creation and regime shifting, so there is more than just the making and breaking of an institution. For instance, parallel regime creation, right? Like the Open-Ended Working Group was an alternative to the UNGGE. Institutions can gain in relevance, or lose relevance, or even become zombies. So is this binary maybe too limited? With regard to empirics, the findings address three main categories, emerging technologies, crises, and idea galaxies, but where do these categories come from? Why did you pick these and not others? And how are they theoretically related to institutionalization? I think the section about idea galaxies is the most elaborate one, so it is clear here which topics you focus on; however, I think you could elaborate more on why you focus on certain ideas and not on others.
For instance, you focus on issues such as gender, human rights, sustainable development, but why not on other issues such as democracy and equality? Also I think this empirical section could be a paper of its own, so you could consider focusing the paper on idea galaxies only, and thoroughly setting out your theory and operationalization, and then things like emerging technologies and crises could function maybe more as scope conditions to competing idea galaxies. Thank you. I give the floor to Jamie.

Jamie Stewart:
Hello, everyone. I hope you can hear me well. Yes. Wonderful. Thank you very much. Let me just start my presentation. Thank you all for having me here, and I do deeply apologize for being remote; I was hoping to be there in person but was unable to make it. I’m Jamie. I’m from UNU in Macau, that is, the United Nations University, and I’m a senior researcher and team lead there. I’m going to be talking about something that’s quite closely related to the presentation of Nanette, but with a little bit of a different focus. It’s not an opposition to technocentric views of cyber security, which focus on the protection of technical systems and networks. Rather, it’s about extending that focus to go beyond technical systems and think about cyber security as ensuring the expression and exercise of human rights, particularly around access to information and freedom of the press. This is described in the slide, which shows our vision. And regardless of which level we look at that at, the national level, the organisational level, even the individual level, protection should be treated as a mechanism for ensuring human security and protecting human rights. So this work is a piece of research that was done in partnership with the UN Women Regional Data Centre on the Asia Pacific.
We centralised the concept of safety and wellbeing and looked at how cyber security practices, particularly within civil society and among those working in the space of human rights defence, can threaten or disempower users of technology. So this also works beyond human factors within cyber security, which indicate to us that people and their behaviours, thoughts, and feelings are important for cyber security practices. That is a component here, but it’s not the central element. The central element is the protection of people and human rights as the function of cyber security. And this is really nicely supported by the Association for Progressive Communications, which has come out with a definition of cyber security that centralises human rights and suggests that cyber security and human rights are complementary, mutually reinforcing, and interdependent, and that therefore we have to pursue them both together to promote freedom and security. So we can take the foundation of human-centric cyber security, and then what we did is add a gendered lens on top of that, because what we’re interested in is cyber security as a function of the WPS, the Women, Peace and Security, agenda, and how we can support women and girls within the context of peace and security. So as I’ve mentioned already, cyber security research tends to focus on the technical. We are interested in bringing human factors into cyber security, which means both understanding psychological and behavioural factors as they shape cyber security, as well as a focus on human rights, harms, and safety. Alongside those two critical elements, we also recognize that gender fundamentally shapes cyber security. Oops, excuse me. And that is for a few major reasons. The first is that there are gender differences in access to and uses of technologies, as well as in interactions in online spaces. All of these things influence cyber security posture and cyber resilience.
We also know, from a lot of work that has been happening within the gender and online violence space, that online gender dynamics tend to perpetuate power relationships that are prevalent offline. So those masculinized norms, and how they influence social relationships, are replicated in online spaces. And we also recognize that women experience distinct types of online violence, and that these types of online violence are more pervasive for women than they are for men. This is all alongside the gender digital divide, which I’ll talk about a little more. So what does cybersecurity look like in Southeast Asia? Well, there is the rapid expansion of digital technologies and internet connectivity within the region, as well as variance in internet connection and development across different countries. So what we see is that there are some countries within the region which are highly prepared and doing a lot of really critical and novel work in terms of governance in this area, and others that are not. The OHCHR just this year released a report on cybersecurity within Southeast Asia, and what they said was that the regulatory instruments being developed within the region, where there is a high level of investment in surveillance, are in particular increasing what they considered to be arbitrary and disproportionate restrictions on freedom of expression and privacy. And there were six key issues that they suggested were relevant for the region. I’m not going to go through these in a lot of detail, because there’s quite a bit for me to cover in the presentation, but these critical elements are the spreading of hate speech, coordinated attacks, technology surveillance, restrictive frameworks, criminalization, and internet shutdowns. I would suggest those who are interested read this report, because it’s very enlightening.
And as I said, this is quite aligned with Nanette’s discussion, where we talked about broader conversations that might be in opposition to human rights. One of the things that has come up recently is that the General Assembly has expressed concerns that cybercrime legislation might be misused against human rights defenders and endanger human rights more generally. We see this a lot in the recognition of what’s happening with journalists around the world and their freedom of speech. Sorry, I’m running through things relatively quickly because I know I don’t have a lot of time. We focused on women, civil society, and human rights defenders in the region. This group is disproportionately affected by cyber attacks, and that is because they’re working with marginalized groups on sensitive, politicized topics. They are often not well protected by laws and regulations where those exist; they have little say in those laws and regulations; and sometimes, and we know this from direct case study work, those laws and regulations are actually used to directly harm them. And they face a gender digital divide, meaning that they’re less represented within the cybersecurity field and technical roles, and therefore less likely to bring that expertise into their own protection. So we wanted to look at cybersecurity risks and resilience with the goal of promoting the human and digital rights of women and girls in Southeast Asia. And what we did was quite a complex project that involved a review of the national and regional context. We did an online survey with those who are employed in civil society organizations advocating for women. We interviewed a whole range of women human rights defenders, specifically those who are also working in the space of digital rights, and then we conducted a cyber audit.
I’m not going to be talking about all of this, and the report will be launched probably early next year, so those of you who are interested can contact me about that. I just wanted to really briefly go over something that I think is of quite a lot of importance. This is not comparing the region to other regions around the world; what this shows is trends in legislation happening within the Southeast Asian region, and the amount of legislation within cyberspace that is happening. So you can see in the top figure that there was a large increase, with 15 new legislative and regulatory frameworks coming about in one year, collapsed across countries, and five in 2022, though the count here was based on our research. So there is a lot of new legislation happening in this area, and some of it looks positive, but it may not necessarily be used in the same way for all people. What we know about these laws in Southeast Asia is that the increasing number and type of laws allow for surveillance, search, and seizure, and there is a whole set of case studies around this, where there is targeted monitoring, including CCTV cameras, collection of biometric data, the surveillance of protestors, the taking of photos of protestors, and the use of AI. Yep, great. I’ll rush through the end. And the use of those types of technologies to target human rights defenders. So we know that all of these, and I won’t go through them in detail, have a lot of impact specifically on human rights defenders. Again, I won’t go through this. Basically, what I wanted to say more generally from this data is that, as I said, there is variance in the way that gender equality and internet freedom are enacted across Southeast Asia.
And even in places where there are high levels of cybersecurity frameworks, they don’t necessarily function in the same way for women’s CSOs and human rights defenders. So, needless to say, in our research we found that technology was actually at the heart of the work that civil society is engaging in, “the life of our work,” as the women we spoke to put it, and that social media was a critical asset for their functioning, while also being a place where they were directly targeted. They faced a huge variety of cyber threats. And we did do some comparison to say that there were high levels of online harassment, misinformation, and cyberbullying, and a huge number of our sample had false information spread about them. We also found that there was less cyber resilience amongst these organizations and human rights defenders than we would hope: about half felt prepared and could respond and recover, but that also means that half did not. What we found here is that we really need to not just chase the constant new threats, but actually allow people to use the features of digital technology in safe and secure ways. The content of the cyber attacks faced by women CSOs and women human rights defenders was highly gendered. And I’ve got some quick case studies here: photos taken without consent and fabricated into deepfakes, used to dehumanize and discredit human rights defenders; an idea that human rights defenders should expect to experience violence and harassment online; death threats and the discrediting of feminist movements; and the silencing and removal of safe spaces for discourse.
I really wanted to just focus on the last recommendation before I finish, which is that, aside from some of the organizational-level recommendations we’re putting forward, what we’re recommending from this work is that there be gender-responsive, human-centric means of recourse against cyber attacks and threats. And this is made particularly difficult in a context where the perpetrators of those cyber attacks may well be state actors, or the attacks may be coordinated, sponsored, and well-funded. So we need to make sure that our frameworks are aligned with this, and that where we’re endorsing these global frameworks, they take this into account. Thank you very much, everybody, for your time.

Moderator:
Thank you, Jamie. I think that’s super important and interesting research, and it’s a piece that I could relate to a lot personally, so I really appreciate it. I basically have three feedback points: one on the scope of your concepts, one on actors, and one on future steps. So with regard to scope, what do you actually include in your definition of threat in your research? In your introduction, you speak of cyber attacks; later, you also talk about digital literacy and misinformation, and these are all different kinds of threats, right? The mechanism of defending yourself against a cyber attack such as doxing or stalking is, I would say, way more direct and immediate than reducing misinformation. So should you not make a categorization of the types of threats and how they impact marginalized communities? How does the causal mechanism of threat differ here and, by extension, how does this call for different types of regulation? At the same time, how far do you think regulation can actually reach? At some point you made a very interesting point that cybercrime legislation is being misused to target human rights defenders. I think this is a very relevant and interesting point, and I think you can make a similar argument about harassers sometimes weaponizing anti-harassment tools built into digital platforms to harass other people, or picking the platforms with the most limited options for moderation. So how effective do you think regulation actually is, and at the same time, what’s the alternative? Then on my second point, about actors: it remained unclear to me who the actors in this piece really are. For instance, it would help if you could give some examples of women human rights defenders and women civil society organizations. Also, maybe some anecdotes at the start could really help the reader understand what type of cases you’re talking about. A similar thing applies to threats. What actors are we talking about here?
Because there are lone wolves and trolls, but there are also coordinated attacks by groups, maybe political groups. So how does this affect policy recommendations? And then finally, on future steps: currently your recommendations are quite broad, and I think you could make them a bit more concrete. For instance, you said that social media is a critical tool for operations but also increases risk exposure. So what instances of risk exposure on social media did you see, and how would you recommend tackling this issue? Thank you. I give the floor to you guys.

Kazim Rizvi:
I hope I’m audible. Thank you to the chair, thank you to GigaNet and the IGF for hosting us today in Kyoto on a very lovely morning, and thank you to Kamesh for making it just in time. Unfortunately, he missed his flight, but he got a new flight today and made it in time, so that’s great to see. So first of all, very quickly introducing myself: my name is Kazim Rizvi, I’m the founding director of The Dialogue. We are a tech policy think tank based out of New Delhi, India, and we work across multiple issues, one of them being AI. We are really excited to present this paper, which is authored by Kamesh along with his colleagues in India; most likely they are enjoying their Sunday morning, unlike Kamesh and me, but I think we are having a better time presenting this paper. So very quickly, because we don’t want to waste too much time: what is the objective, and what are we trying to do here? As you see in the title, this is basically looking at enabling responsible AI in India, and we have come up with some principles, principles which we believe need to be implemented at different stages. The uniqueness of this paper is that the principles cut across the development stage of AI, the deployment, as well as the usage by various actors and consumers; I think that’s where the uniqueness lies, and that’s what we are trying to do, because this has not been discussed in India, at least until today, and that’s the idea behind our work on this paper. So if we move to the next slide, just going through the outline very quickly: in the last year or so, we’ve become accustomed to hearing the word AI a lot more, right, with the rise of generative AI applications; most of you in this room and listening to us online are having a direct interaction.
And we see AI proliferating across a lot of different ecosystems. And what we see as researchers is that the technology is moving away from being just a B2B technology to a B2C technology, where consumers are directly interfacing with AI models to, you know, help them with their daily tasks, professional duties, et cetera. So, you know, we see a lot of algorithms. In many ways, the term we coined is that algorithms are the atom of the Internet, right? You cannot live without them, and they create the structure of the modern Internet as of today and the services which are provided on it. So, for example, I have a cat in my house, and I go on social media, and then I’m seeing multiple options to buy different types of food and things for the cat. And I’m not saying that you can’t do that, right? You can buy a specific kind of food, but if you go to social media and post some pictures, then you’ll be getting different types of suggestions and interventions. So it’s really taking over in terms of giving you ideas and inputs, from the music you listen to to the places you want to visit. It’s really everywhere in our lives today. So while it’s doing a lot of good things, there are certain challenges, right? And I think, as we move to the next slide, what we’ve tried to do in this paper is understand those challenges and identify the implementing frameworks that governments, scholars, development organizations, multilateral organizations, tech companies, and civil society have to work towards. So what we’ve done is we’ve mapped out certain specific ways of identifying responsible AI.
Maybe we can move to the next slide. Yeah. So in this paper, we’ve mapped impacts and harms. We’ve looked at AI at the development stage, that is, the design and development at the algorithmic and model development stage, where we’ve analyzed the harms which could take place when you’re designing the technology, when you’re really coding it, when you’re coming up with algorithms, when you’re collecting data: what kind of data you’re collecting, how you should collect the data, what its authenticity is, et cetera. So that is one stage which we’ve examined. And the second stage is the harm stage, which is the post-development deployment stage, when the technology is deployed by industries. It could be horizontal industries such as finance, education, even environmental sustainability, social media; whatever industry is using the technology, there are certain harms present there. So how do we protect ourselves from those harms? So these are the two stages which we’ve come up with, and again, this is a very unique approach, because most of the principles which you see, be it the OECD principles or the UNICEF principles or different multilateral or bilateral principles, are mostly focusing on the deployment stage, and what we have figured out is that design, development, and deployment, all three stages, have to be met, and that’s the focus of the paper. If you go to the next slide, it pretty much sums up what we are doing. So three stakeholders: the developer, then the deployer, and then the end user, the end population. What are the principles for the end population as well? So, let’s say you develop a health tech application: there are principles for the technologists, the coders who have designed the application.
There are principles for the hospitals, clinics, doctors who are using the technology, and then there are principles when it comes to

how consumers are interfacing with the technology, and how you protect them as well. So, these are the three stages and the stakeholders. So, then we’ve mapped these harms across the AI lifecycle, and over here, Kamesh, if you want to quickly come in and talk a little bit about how we’ve done this mapping of different principles and what those principles are. I hope I’m audible. Yeah, I guess I am.

Kamesh Shekar:
So, thank you, Kazim, for setting the context for the paper itself. Picking up from where you left off: what the paper is trying to do, and why this is a unique way of looking at things, is that most of the frameworks which are available out there are overly concentrated on the risk management which comes at the AI developer level. But what we went about doing is look at a 360-degree approach, where we wanted to move beyond the developer and ask a question: if a developer designs a technology ethically, does that mean that when the technology is deployed or used, nothing will fall through the cracks? To answer this question, we came up with the model of a principle-based ecosystem approach. The model, as Kazim mentioned, is basically about mapping all the principles for the various stakeholders who come within the ecosystem, such that collectively we could ensure that the adverse impacts we have mapped don’t happen. So firstly, what we did is take five adverse impacts: we chose exclusion, false prediction, copyright infringement, privacy concerns, and information disorder. Why these five? Basically because these are the top five aspects which are talked about when it comes to AI implications. But this is not an exhaustive list; this is just a start of what we are doing. Then, as Kazim already mentioned, we tried to look at impact and harm. For us, the distinction is that an impact is just the construct of a harm which could happen later, and how much you are aware of that, while a harm is when the exposure happens and the actual harm occurs. So the idea behind this slide, and the first aspect of our paper, is this.
Whenever we talk about exclusion or any of these adverse impacts, we don’t really look at it at the granular level, where different stakeholders are involved at different stages of the AI lifecycle, contributing at different levels, which accumulates into something like exclusion happening. So we went about mapping all of those impacts and harms which occur at the different stages of the lifecycle of the AI itself. One important aspect here, if you look at the slide, is that we have added two important additional stages, shown in grey: the actual operationalization, which is at the deployment level, and the direct usage, which is what Kazim mentioned about B2C implications coming into the picture. So, next slide. Yeah, I think we skipped one. Now it’s working. Yeah, this is the one. So now that the impacts and harms are mapped, the paper goes about mapping the various principles that could be followed by different stakeholders at the different stages of the lifecycle. And here, if you look, these are some of the principles which have been extracted from globally available frameworks, like the OECD, the UN, and the EU, and also India’s G20 declaration, which also speaks about some of these principles. In addition to that, from our own research, we have suggested some new principles. So after principles, what we go about doing is the operationalization. Here, the unique thing the paper tries to do is this: when we talk about human in the loop as a principle, most of the time we just use the term in passing. But when it comes to operationalization, that particular principle means something different at different stages of the lifecycle, and that exact difference is what we wanted to bring out in this paper.
For example, if you look at this, at the planning through build-and-use stages, human in the loop really means that you want to engage with your stakeholders. Whereas at the actual operationalization stage, it could mean that you have to give human autonomy to the subjected people, where they could also take decisions against whatever decision the AI has given. So we have brought out such differences within the operationalization. Now that the impacts are mapped, the principles are mapped, and the operationalization is done, finally, to give a holistic approach, the paper also talks about implementation, which comes from the government. Our research is extensively in the Indian context, so we went about looking at what can be done in the Indian context in terms of implementing such a framework. We look at domestic coordination, which is important within the legislations, and then international cooperation is important, because various aspects are happening at different institutional, jurisdictional, and bilateral levels. Also, with India moving towards the chair of the GPAI, I guess this paper adds great value in terms of starting that conversation. Just one minute. And finally, we also talk about establishing public and private collaboration in terms of how we can implement it. And this is something that, as an organization, we keep pushing: it does not necessarily have to be something at the compliance level; it can also come at the level of making it a value proposition for businesses to take up.
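The principle-based ecosystem approach described above, mapping principles to each stakeholder at each lifecycle stage, can be pictured as a simple lookup table. The stakeholder names, stage names, and principle wordings below are illustrative placeholders loosely based on the talk, not the paper’s actual framework:

```python
# Illustrative (stakeholder, lifecycle stage) -> principles lookup.
# Note how the same principle, "human in the loop", is operationalized
# differently at different stages, as the talk emphasizes.
PRINCIPLE_MAP = {
    ("developer", "planning"): ["human in the loop: engage stakeholders",
                                "data authenticity"],
    ("deployer", "operationalization"): ["human in the loop: preserve human autonomy",
                                         "recourse against AI decisions"],
    ("end user", "direct usage"): ["transparency", "grievance redressal"],
}

def principles_for(stakeholder, stage):
    """Return the principles an actor should follow at a given stage
    (empty list if the pair is not mapped)."""
    return PRINCIPLE_MAP.get((stakeholder, stage), [])

print(principles_for("deployer", "operationalization"))
```

The design choice the paper argues for is visible in the keys: responsibilities are indexed by the pair of actor and stage, rather than attached to developers alone.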

Moderator:
So I’m going to cut you here, I’m sorry. Thank you so much for your presentation. I think you bring up a very relevant question that addresses a lot of blind spots in current academia: instead of looking only at developers, you also look at deployers and their role in the responsible use of AI. I have three main points of feedback: one on the focus of your argument, one on narrowing down concepts, and one on your causal mechanisms. First, the focus of your argument: like I said, instead of looking at developers, you look at deployers. But I wondered, why not also focus on end users? I don’t think you believe that users have no role to play in the responsible and ethical use of AI, right? And especially since you talk a lot about generative AI, which is often steered by end users. So what is your perception of the role they play in the responsible use of AI? Second, on narrowing down concepts: I think you can often make your argument more concrete. For instance, on page 16 you argue that AI solutions might be producing an error or may be designed to capture some biased parameters to produce a suggested outcome, but that real-life harms of such outcomes only translate into action when AI deployers blindly use them for making real-life decisions. In this case I wondered: what do you mean by real-life harms? What do you mean by real-life decisions? What do you mean by AI solutions? This sometimes gets so broad that it could mean anything, and I think specifying what you mean would actually strengthen your argument. The arguments often remain quite abstract, and you can make them more concrete by defining what you mean by AI and AI solutions, and by mentioning a couple of examples.
And then finally, on conceptualization and causal mechanisms: we saw the figure on the AI lifecycle, and I had a number of questions about this model. On a general level, it remained unclear to me where this model is derived from. Where does it come from? How did you arrive at it? I think you need a bit more of: what is already out there, and what was used to arrive at this model? Second, you argue that you want to focus on deployers, but the largest part of the model still concerns developers, so it is not completely in line with the argument you make in the paper. You say: everybody focuses on developers, we focus on deployers. But in the model of the AI lifecycle, it is mostly developers. So what really is the role of deployers in this model? And finally, I thought it was interesting that the top two categories were exclusion and false prediction, but there was no impact on end users, and I wondered why, because I think there is a lot of impact on end users if we think about exclusion and false predictions. So these are my points. I would like to open up the floor if people have questions or comments. Yes. And then after we collect them, we go back to the panel. Anybody else?

Audience:
Yes, so I’m going to ask you a really tough question, but it’s more an attempt to make a general point about how messed up our dialogue about AI is, rather than focusing on you, because I think your mapping of this ecosystem was actually a pretty interesting and worthwhile contribution. But you open your paper by invoking the invention of the printing press, right? Now, can you use your imagination and try to project for me what would have happened if the authorities and the public in 1452 had decided they were going to regulate printing? What do you think would have resulted from that?

Moderator:
Do we have any other questions in the room? Because otherwise we can go back to the panel. We can go in reverse order. We have about nine minutes left, so that would be three minutes maximum each.

Kazim Rizvi:
Sure, so to your question. I think that’s a good point. In this paper, we haven’t suggested that AI should be regulated, right? What we are saying is: look, there are certain harms associated with the use of AI that we have to be careful of, and we have to work towards developing frameworks and principles around the harms we’ve identified. Across the globe, AI is being regulated as it is. We’ve not taken the stand that you have to come up with very strong regulations that bucket technologies into kinds that should or should not be used. Maybe in the next 10 or 15 years, as usage grows, we may see that AI needs firmer regulation. For now, these are principles which will help improve the effectiveness of the technology. The same argument applies to fire: if fire had been strictly regulated at the outset, we may not have seen what we see today, but eventually it was regulated. The same applies here. AI has been around for a few decades now; it’s not like the technology is very new of late. But we are not suggesting very strict or very hard regulations to begin with. What we’re suggesting is: move slowly, but watch out for harms as they take place. Look at industry, look at civil society: these frameworks are a means to put that discussion into context, that we need to move towards more responsible deployment of AI. And what that means, even we don’t know. We are all studying this.
A lot of scholars globally are trying to figure out what responsible AI really is, much as one might ask what a responsible printing press or responsible use of fire would be.

Moderator:
Very quickly coming in on your points.

Kamesh Shekar:
There are too many things to discuss in what you have said; we can also take it offline. On your very first point, about impacted populations and end users: the paper does address that. It also asks how we, as end users and impacted populations, should responsibly use such technologies, and there are certain principles and operationalization points we discuss there. On your second question, about where the lifecycle comes from: it is derived from NIST, OECD, and other frameworks. In addition, we have added some aspects of our own, and within the paper we have also justified why we think they are important. On the third point, could you repeat it? I believe it was something about exclusion.

Moderator:
Well, I don’t think we have time. I think we should take it to the break. We can go back to Jamie for some last points and then wrap it up. Jamie, are you still there? Yes. Yes, I’m still here. Thank you. And thank you very much, Danielle, for your comments.

Jamie Stewart:
I will be very brief on these, because I obviously don’t have much chance to go over them in detail, and you brought up some really good points. Just to say: we had a very comprehensive list of threats that we asked about, covering experiences at both the personal and the organizational level, and we also had some open-ended questions so people could add more. There will be a lot more information about that in the report. You also asked about anecdotes in terms of actors. This is a very sensitive issue, particularly in terms of diplomacy and what that looks like. We did ask about perpetrators and who respondents think the perpetrators are, and obviously we have to consider that this is perceptual, as I mentioned at the very beginning. There is a range of state and non-state actors, and as I said, most of the attacks described in the stories were very coordinated. What people are experiencing on social media, and what those attacks look like, is obviously sometimes very difficult to trace, but we can definitely trace some of the surveillance software as it was used, and that is very relevant to the South Asian context. I also want to say, first, that your point about making the recommendations more concrete is very well taken, and we are working with civil society right now to co-create those. The last thing I wanted to end on is your very important point, which I agree with entirely: the misuse of regulation, law, and policy against human rights defenders, journalists, advocates, and those who speak out.
I think this is an incredibly important and very nuanced point. The laws I really want to highlight and pay attention to are those framed as anti-terrorism laws and the more generic cybercrime laws, which put those who speak out in a position where they could be legislated against. That is something we need to consider very strongly when we adopt more global and international regulatory frameworks, because they can do the opposite of what is intended. And simply having cybersecurity policy in place does not mean it is in place in a way that protects human rights. So thank you for bringing that up, and thank you so much, everybody.

Moderator:
Thanks, Jamie. Thanks to everybody who was on this panel and who participated, and thanks also to Nanette, even though she’s no longer here. I think we can go to the break. Everybody, please give a hand to the panelists. What time do we reconvene? We come back at 1.40, everybody, and that goes for people online as well. Okay.

Speaker statistics

Audience — speech speed: 179 words per minute; speech length: 256 words; speech time: 86 secs
Berna Akcali Gur — speech speed: 147 words per minute; speech length: 2264 words; speech time: 921 secs
Jamie Stewart — speech speed: 162 words per minute; speech length: 2664 words; speech time: 984 secs
Kamesh Shekar — speech speed: 180 words per minute; speech length: 1281 words; speech time: 426 secs
Kazim Rizvi — speech speed: 210 words per minute; speech length: 1607 words; speech time: 458 secs
Kimberley Anastasio — speech speed: 171 words per minute; speech length: 2039 words; speech time: 717 secs
Moderator — speech speed: 140 words per minute; speech length: 4886 words; speech time: 2101 secs
Nanette Levinson — speech speed: 141 words per minute; speech length: 1476 words; speech time: 629 secs
Vagisha Srivastava — speech speed: 171 words per minute; speech length: 2617 words; speech time: 916 secs
Yik Chan Chin — speech speed: 157 words per minute; speech length: 2398 words; speech time: 919 secs