A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Tara Denham

Canada is leading the way in taking AI governance seriously by integrating digital policy with human rights. The Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada is actively working on the geopolitics of artificial intelligence, ensuring that AI development and governance uphold human rights principles.

The Canadian government is actively developing regulation, policy, and guiding principles for AI. It has implemented a directive on how government will handle automated decision-making, including an algorithmic impact assessment tool. To ensure responsible development and management of AI, the government has published a voluntary Code of Conduct and is working on Artificial Intelligence and Data Act legislation. Additionally, the government requires engagement with stakeholders before deploying generative AI, demonstrating its commitment to responsible AI implementation.

Stakeholder engagement is considered essential in AI policy making, and Canada has taken deliberate steps to involve stakeholders from the start. They have established a national table that brings together representatives from the private sector, civil society organizations, federal, provincial, and territorial governments, as well as Indigenous communities to consult on AI policies. This inclusive approach recognizes the importance of diverse opinions and aims to develop policies that are representative of various perspectives. However, it is acknowledged that stakeholder engagement can be time-consuming and may lead to tensions due to differing views.

Canada recognizes the significance of leveraging existing international structures for global AI governance. They have used the Freedom Online Coalition to shape their negotiating positions on UNESCO Recommendations on AI ethics. Additionally, they are actively participating in Council of Europe negotiations on AI and human rights. However, it is noted that more countries and stakeholder groups should be encouraged to participate in these international negotiations to ensure a comprehensive and inclusive global governance framework for AI.

There is also a need for global analysis on what approaches to AI governance are working and not working. This analysis aims to build global capacity and better understand the risks and impacts of AI in different communities and countries. Advocates emphasize the importance of leveraging existing research on AI capacity building and research, supported by organizations like the International Development Research Centre (IDRC).

Furthermore, there is a strong call for increased support for research into AI and its impacts. IDRC in Canada plays a pivotal role in funding and supporting AI capacity-building initiatives and research. This support is crucial in advancing our understanding of AI’s potential and ensuring responsible and beneficial implementation.

In conclusion, Canada is taking significant steps towards effective AI governance by integrating digital policy with human rights, developing regulations and policies, and engaging stakeholders in decision-making processes. By leveraging existing international structures and conducting global analysis, Canada aims to contribute to a comprehensive and inclusive global AI governance framework. Additionally, their support for research and capacity-building initiatives highlights their commitment to responsible AI development.

Marlena Wisniak

The analysis highlights several important points regarding AI governance. One of the main points is the need for mandatory human rights due diligence and impact assessments in AI governance. The analysis suggests that implementing these measures globally presents an opportunity to ensure that AI development and deployment do not infringe upon human rights. This approach is informed by the UN Guiding Principles on Business and Human Rights, which provide a framework for businesses to respect human rights throughout their operations. By incorporating human rights impact assessments into AI governance, potential adverse consequences on human rights can be identified and addressed proactively.

Another key point raised in the analysis is the importance of stakeholder engagement in AI governance. Stakeholder engagement is viewed as a collaborative process in which diverse stakeholders, including civil society organizations and affected communities, can meaningfully contribute to decision-making processes. The inclusion of external stakeholders is seen as crucial to ensure that AI governance reflects the concerns and perspectives of those who may be affected by AI systems. By involving a range of stakeholders, AI governance can be more comprehensive, responsive, and representative.

Transparency is regarded as a prerequisite for AI accountability. The analysis argues that AI governance should mandate that AI developers and deployers provide transparent reporting on various aspects, such as datasets, performance metrics, human review processes, and access to remedy. This transparency is seen as essential to enable meaningful scrutiny and assessment of AI systems, ensuring that they function in a responsible and accountable manner.

Access to remedy is also highlighted as a crucial aspect of AI governance. This includes the provision of internal grievance mechanisms within tech companies and AI developers, as well as state-level and judicial mechanisms. The analysis argues that access to remedy is fundamental for individuals who may experience harm or violations of their rights due to AI systems. By ensuring avenues for redress, AI governance can provide recourse for those affected and hold accountable those responsible for any harm caused.

The analysis also cautions against over-broad exemptions for national security or counter-terrorism purposes in AI governance. It argues that such exemptions, if not carefully crafted, have the potential to restrict civil liberties. To mitigate this risk, any exemptions should have a narrow scope, include sunset clauses, and prioritize proportionality to ensure that they do not unduly infringe upon individuals’ rights or freedoms.

Furthermore, the analysis uncovers a potential shortcoming in AI governance efforts. It suggests that while finance, business, and national security are often prioritized, human rights are not given sufficient consideration. The analysis calls for a greater focus on human rights within AI governance initiatives, ensuring that AI systems are developed and deployed in a manner that respects and upholds human rights.

The analysis also supports the ban of AI systems that are fundamentally incompatible with human rights, such as biometric surveillance in public spaces. This viewpoint is based on concerns about mass surveillance and discriminatory targeted surveillance enabled by facial recognition and remote biometric recognition technologies. Banning such technologies is seen as necessary to safeguard privacy and freedom and to prevent violations of human rights.

In addition to these key points, the analysis reveals a couple of noteworthy observations. One observation is the importance of multistakeholder participation and the need to engage stakeholders in the process of policymaking. This is seen as a means to balance power dynamics and address the potential imbalance between stakeholders, particularly as companies often possess financial advantages and greater access to policymakers. The analysis highlights the need for greater representation and involvement of human rights advocates in AI governance processes.

Another observation relates to the capacity and resources of civil society, especially in marginalized groups and global majority-based organizations. The analysis urges international organizations and policymakers to consider the challenges faced by civil society in terms of capacity building, resources, and finance. It emphasizes the need for more equitable and inclusive participation of all stakeholders to ensure that AI governance processes are not dominated by powerful actors or leave marginalized groups behind.

Finally, the analysis suggests that laws in countries like Canada can have a significant influence on global regulations, especially in countries with repressive regimes or authoritarian practices. This observation draws attention to the concept of the “Brussels effect,” wherein EU regulations become influential worldwide. It highlights the potential for countries with stronger regulatory frameworks to shape AI governance practices globally, emphasizing the importance of considering the implications and potential impacts of regulations beyond national borders.

In conclusion, the analysis underscores the importance of incorporating mandatory human rights due diligence, stakeholder engagement, transparency, access to remedy, and careful consideration of exemptions in AI governance. It calls for greater attention to human rights within AI governance efforts, the banning of AI systems incompatible with human rights, and the inclusion of diverse perspectives and voices in decision-making processes. The analysis also draws attention to the challenges faced by civil society and the potential influence of laws in one country on global regulations. Overall, it provides valuable insights for the development of effective and responsible AI governance frameworks.

Speaker

Latin America faces challenges in meaningful participation in shaping responsible AI governance. These challenges are influenced by the region’s history of authoritarianism, which has left its democracies weak. Moreover, there is a general mistrust towards participation, further hindering Latin America’s engagement in AI governance.

One of the main obstacles is the tech industry’s aggressive push for AI deployment. While there is great enthusiasm for AI technology, there is a lack of comprehensive understanding of its limitations, myths, and potential risks. Additionally, the overwhelming number of proposals and AI guidance documents makes it difficult for Latin America to keep up and actively contribute to the development of responsible AI governance.

Despite these challenges, Latin America plays a crucial role in the global chain of AI technological developments. The region is a supplier of vital minerals like lithium, which are essential for manufacturing AI systems. However, the mining processes involved in extracting these minerals often have negative environmental impacts, including pollution and habitat destruction. This has led to mixed sentiments regarding Latin America’s involvement in AI development.

Latin America also provides significant resources, data, and labor for AI development. The region supplies the raw materials needed for hardware manufacturing and offers diverse datasets collected from various sources for training AI models. Additionally, Latin America’s workforce contributes to tasks such as data labeling for machine learning purposes. However, these contributions come at a cost, with negative impacts including environmental consequences and labor exploitation.

It is crucial for AI governance to prioritize the impacts of AI development on human rights. Extracting material resources for AI development has wide-ranging effects, including environmental degradation and loss of biodiversity. Moreover, the health and working conditions of miners are often disregarded, and there is a lack of attention to data protection and privacy rights. Incorporating human rights perspectives into AI governance is necessary.

Another concerning issue is the use of AI for surveillance purposes and welfare decisions by governments, without adequate transparency and participation standards. The deployment of these technologies without transparency raises concerns about citizen rights and privacy.

To address these challenges, it is necessary to strengthen democratic institutions and reduce asymmetries among regions. While Latin America provides resources and labor for AI systems designed elsewhere, AI governance processes often remain distant from the region. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and participation are essential.

In conclusion, Latin America faces obstacles in meaningful participation in shaping responsible AI governance due to the aggressive push for AI deployment and its history of authoritarianism. However, the region plays a crucial role in the global AI technological chain by providing resources, data, and labor. It is important to consider the impacts of AI development on human rights and promote transparency and participation in AI governance. Strengthening democratic institutions and addressing regional asymmetries are necessary for a more inclusive and equitable AI governance process.

Ian Barber

The analysis conducted on AI governance, human rights, and global implications reveals several key insights. The first point highlighted is the significant role that the international human rights framework can play in ensuring responsible AI governance. Human rights are deeply rooted in various sources, including conventions and customary international law. Given that AI is now able to influence many aspects of life, from job prospects to legal verdicts, it becomes essential to leverage the international human rights framework to establish guidelines and safeguards for AI governance.

Another important aspect is the ongoing efforts at various international platforms to develop binding treaties and recommendations on AI ethics. The Council of Europe, the European Union, and UNESCO are actively involved in this process. For instance, the Council of Europe is working towards the development of a binding treaty on AI, while the European Union has initiated the EU AI Act, and UNESCO has put forth recommendations on the ethics of AI. These efforts are crucial to prevent the exacerbation of inequality and the marginalization of vulnerable groups.

Stakeholder engagement is identified as a vital component of responsible AI governance. The path towards effective governance cannot be traversed alone, and it is crucial to ensure meaningful engagement from relevant stakeholders. These stakeholders include voices from civil society, private companies, and international organizations. Their input, perspectives, and expertise can contribute to the development of comprehensive AI governance policies that consider the diverse needs and concerns of different stakeholders.

One noteworthy observation made during the analysis is the importance of amplifying the voices of the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance. It is crucial to address this imbalance and include voices from diverse backgrounds and regions in discussions on AI governance. The workshop itself is framed as a call to action, the beginning of an ongoing collective effort to address the complexities brought about by AI.

The analysis also emphasizes the need to consider regional perspectives and involvement in global AI development. Regional developments are essential factors to take into account when formulating AI policies and strategies. This ensures that the implications and impact of AI are effectively addressed at a regional level.

Furthermore, the analysis highlights the significance of African voices in the field of responsible AI governance and the promotion of human rights. Advocating for strategies or policies on emerging technologies specifically tailored for African countries can contribute to better outcomes and equitable development in the region.

Another noteworthy point is the need to bridge the gaps in discourse between human rights and AI governance. The analysis identifies gaps in understanding how human rights principles can be effectively integrated into AI governance practices. Addressing these gaps is essential to ensure that AI development and deployment are in line with human rights standards and principles.

In conclusion, the analysis underscores several important considerations for AI governance. Leveraging the international human rights framework, developing binding treaties and recommendations on ethics, fostering stakeholder engagement, considering global majority voices, including regional perspectives, and amplifying African voices are all critical aspects of responsible AI governance. Additionally, efforts should be made to bridge the gaps in discourse between human rights and AI governance. By integrating human rights principles and adhering to the international rights framework, AI governance can be ethically sound and socially beneficial.

Shahla Naimi

The analysis explores the impact of AI from three distinct viewpoints. The first argument suggests that AI has the potential to advance human rights and create global opportunities. It is argued that AI can provide valuable information to human rights defenders, enabling them to gather comprehensive data and evidence to support their causes. Additionally, AI can improve safety measures by alerting individuals to potential natural disasters like floods and fires, ultimately minimizing harm. Moreover, AI can enhance access to healthcare, particularly in underserved areas, by facilitating remote consultations and diagnoses. An example is provided of AI models being developed to support the 1000 most widely spoken languages, fostering better communication across cultures and communities.

The second viewpoint revolves around Google’s commitment to embedding human rights into its AI governance processes. It is highlighted that the company considers the principles outlined in the Universal Declaration of Human Rights when developing AI products. Google also conducts human rights due diligence to ensure their technologies respect and do not infringe upon human rights. This commitment is exemplified by the company-wide stance on facial recognition, which addresses ethical concerns surrounding the technology.

The third perspective emphasizes the need for multi-stakeholder and internationally coordinated AI regulation. It is argued that effective regulation should consider factors such as the structure, scope, subjects, and standards of AI. Without international coordination, fragmented regulations with inconsistencies may arise. Involving multiple stakeholders in the regulatory process is vital to consider diverse perspectives and interests.

Overall, the analysis highlights AI’s potential to advance human rights and create opportunities, particularly in information gathering, safety, and healthcare. It underscores the importance of embedding human rights principles into AI governance processes, as demonstrated by Google’s commitments. Furthermore, multi-stakeholder and internationally coordinated AI regulation is crucial to ensure consistency and standards. These viewpoints provide valuable insights into the ethical and responsible development and implementation of AI.

Pratek Sibal

A recent survey conducted across 100 countries revealed a concerning lack of awareness among judicial systems worldwide regarding artificial intelligence (AI). This lack of awareness poses a significant obstacle to the effective implementation of AI in judicial processes. Efforts are being made to increase awareness and understanding of AI in the legal field, including the launch of a Massive Open Online Course (MOOC) on AI and the Rule of Law in seven different languages. This course aims to educate judicial operators about AI and its implications for the rule of law.

Existing human rights laws in Brazil, the UK, and Italy have successfully addressed cases of AI misuse, suggesting that international human rights law can be implemented through judicial decisions without waiting for a specific AI regulatory framework. By proactively applying existing legal frameworks, countries can address and mitigate potential AI-related human rights violations.

In terms of capacity building, it is argued that institutional capacity building is more sustainable in the long term compared to individual capacity building. Efforts are underway to develop a comprehensive global toolkit on AI and the rule of law, which will be piloted with prominent judicial institutions such as the Inter-American Court of Human Rights and the East Africa Court of Justice. This toolkit aims to enhance institutional capacity to effectively navigate the legal implications of AI.

Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure inclusivity and accessibility. This includes the development of a comic strip available in various languages and a micro-learning course on defending human rights in the age of AI provided in 25 different languages.

Canada’s AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact. These projects have supported the growth of communities in creating language datasets and developing applications in healthcare and agriculture, thereby increasing the capacity of civil society organizations in these regions.

The evolution of international standards and policy-making has seen a shift from a traditional model of technical assistance to a more collaborative, multi-stakeholder approach. This change involves engaging stakeholders at various levels in the development of global policy frameworks, ensuring better ownership and effectiveness in addressing AI-related challenges.

Pratek Sibal, a proponent of the multi-stakeholder approach, emphasizes the need for meaningful implementation throughout the policy cycle. Guidance on developing AI policies in a multi-stakeholder manner has been provided, covering all phases from agenda setting to drafting to implementation and monitoring.

Dealing with authoritarian regimes and establishing frameworks for AI present complex challenges with no easy answers. Pratek Sibal acknowledges the intricacies of this issue and highlights the need for careful consideration and analysis in finding suitable approaches.

In conclusion, the survey reveals a concerning lack of awareness among judicial systems regarding AI, hindering its implementation. However, existing human rights laws are successfully addressing AI-related challenges in several countries. Efforts are underway to enhance institutional capacity and involve communities in strengthening human rights in the age of AI. The positive impact of Canada’s AI for Development projects and the shift towards a collaborative, multi-stakeholder approach in international standards and policy-making are notable developments. Dealing with authoritarian regimes in the context of AI requires careful consideration and exploration of suitable frameworks.

Audience

Different governments and countries are adopting varied approaches to AI governance. The transition from policy to practice in this area will require a substantial amount of time. However, there is recognition and appreciation for the ongoing multi-stakeholder approach, which involves including various stakeholders such as governments, industry experts, and civil society.

It is crucial to analyze and assess the effectiveness of these different approaches to AI governance to determine the most successful strategies. This analysis will inform future decisions and policies related to AI governance and ensure their efficacy in addressing the challenges posed by AI technologies.

UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly, they have also involved children in the process. This approach of engaging children in policy creation has proven to be valuable, as their perspectives and experiences have enriched the final product. Inclusion and engagement of children in policy creation and practices around AI are viewed as both meaningful and necessary.

Furthermore, efforts are being made to ensure responsible AI in authoritarian regimes. Particularly, there is ongoing work on engaging Technical Advisory Groups (TAG) for internet freedoms in countries such as Myanmar, Vietnam, and China. This work aims to promote responsible AI practices and address any potential human rights violations that may arise from the use of AI technologies.

Implementing mechanisms to monitor responsible AI in authoritarian regimes is of utmost importance. These mechanisms can help ensure that AI technologies are used in ways that adhere to principles of human rights and minimize potential harms.

Interestingly, it is noted that implementing policies to monitor responsible AI is relatively easier in human rights-friendly countries compared to authoritarian ones. This observation underscores the challenges faced in authoritarian regimes where governments may exert greater control over AI technologies and policies.

In conclusion, the varied approaches to AI governance taken by governments and countries need careful analysis to determine their effectiveness. Engaging children in policy creation and promoting responsible AI in authoritarian regimes are fundamental steps in fostering a safe and inclusive AI ecosystem, though monitoring responsible AI remains considerably harder in authoritarian contexts than in human rights-friendly countries. These insights highlight the ongoing efforts required to develop effective AI governance frameworks that protect human rights and promote responsible AI use.

Oluseyi Oyebisi

The analysis highlights the importance of including the African region in discussions on AI governance. It notes that the African region is coming late to the party in terms of participating in AI governance discussions and needs to be included to ensure its interests are represented. The argument presented is that African governments, civil society, and businesses should invest in research and engage more actively in global conversations regarding AI governance.

One of the main points raised is the need for Africa to build technical competence to effectively participate in international AI negotiations. It is mentioned that African missions abroad must have the right capacity to take part in these negotiations. Furthermore, it is noted that universities in Africa are not yet prepared for AI development and need to strengthen their capabilities in this area.

Additionally, the analysis suggests that African governments should consider starting with soft laws and working with technology platforms before transitioning to hard laws. It is argued that this approach would allow them to learn from working with technology platforms and progress towards more rigid regulations. The need for regulation that balances the needs of citizens is emphasized.

The analysis also highlights the need for African governments, civil society, and businesses to invest in research and actively engage in global platforms related to AI governance. It is mentioned that investment should be made in the right set of meetings, research, and engagements. Bringing Africans into global platforms is seen as a crucial step towards ensuring their perspectives and needs are considered in AI governance discussions.

Overall, the expanded summary emphasizes the need to incorporate the African region into the global AI governance discourse. It suggests that by building technical competence, starting with soft laws, and actively engaging in research and global platforms, African countries can effectively contribute to AI governance and address their specific development challenges.

Session transcript

Ian Barber:
Hope everyone’s doing well. Thank you so much for joining this session. One of the many this week on AI and AI governance, but with a more focused perspective on a global human rights approach to AI governance. My name is Ian Barber. I’m legal lead at Global Partners Digital. We’re a civil society organization based in London working to foster an online environment underpinned by human rights. We’ve been working on AI governance and human rights for several years now. So I’m very happy to be co-organizing and facilitating this alongside Transparencia Brazil, who is our online moderator. So thank you very much. What I’ll be doing over the next few minutes is providing a bit of introduction to this workshop, setting the scene, introducing our fantastic speakers, both in person and online, and providing a bit of structure as well for the discussion that we’re having today and some housekeeping rules. Really, this workshop is meant to acknowledge that we stand at the intersection of two realities, the increasing potential of artificial intelligence on one hand and the ongoing relevance of the international human rights framework on the other. When we think of a human rights-based approach to AI governance, a few things come to mind. Firmly and truly grounding policy approaches in the international human rights framework, the ability to assess risks to human rights, promoting open and inclusive design and deployment of AI, as well as ensuring transparency and accountability amongst other elements and measures. And given this, it’s probably not news to anyone in the room that the rapid design, development, and deployment of AI demands our attention, our understanding, and our collaborative efforts across various different stakeholders. Human rights are enshrined in various sources, such as conventions
and customary international law, and their dynamic interpretation and evolution work to guide us towards a world where people can continually exercise and enjoy their human rights and thrive without prejudice or discrimination or other forms of injustice. And like any technology, AI poses both benefits and risks to the enjoyment of human rights. I’m sure you’ve attended other sessions this week where speakers went into a bit more detail about what those look like in various sectors and on different civil, political, economic and social rights. But today, what we’re gonna be doing is narrowing in on a few key questions. The first is how can the international human rights framework be leveraged to ensure responsible AI governance in a rapidly changing context and world that we live in? And I think this question is important because it underscores how AI is now able to influence so many things from our job prospects, our ability to express ourselves, legal verdicts. And so how do we ensure that human rights continue to be respected, protected and promoted is key. Secondly, we must reflect upon the global implications for human rights in the ongoing proliferation of AI governance frameworks that we’re seeing today. And also, in the potential absence of effective frameworks, what is the result and what are we looking at? There has been this ongoing proliferation of efforts at the global, regional, national level to provide frameworks, rules and other types of normative structures and standards that are supposed to promote and safeguard human rights. For example, just to highlight a few, there’s ongoing efforts at the Council of Europe to develop a binding treaty on AI. There’s the European Union’s efforts with the EU AI Act. There’s UNESCO’s recommendations on the ethics of AI, which is finalized but currently undergoing implementation. And other efforts such as the more recently proposed UN High-Level Advisory Body on AI.
But at this point, we've yet to see comprehensive and binding frameworks enacted which might be considered effective and sufficient to protect human rights. Without these safeguards and protections, we risk exacerbating inequality, silencing marginalized groups and voices, and inadvertently creating a world where AI serves more as a divider than as a promoter of equality. So what do we want to see, and what do we want to do, to ensure that this is not the case and not the future that we're looking at? And lastly, over the next 80 or so minutes: the path towards responsible AI governance is not one that can be traversed alone. We need to navigate these challenges together, fostering meaningful engagement by all relevant stakeholders. That's why on this panel we have voices from civil society, from private companies, and from international organizations, which are all needed. And we also need to particularly amplify voices from the global majority. Historically, many regions across the world have been left out of global dialogues and efforts at global governance, and that's very much the case when it comes to AI as well. So this workshop is not just a gathering; it is one for information sharing, but it's also a call to action. It's really, I think, the beginning of an ongoing collective effort to address the range of complexities that have come about from AI, and to work to ensure the ongoing relevance of our shared human values and human rights. So with that intro and framing, I'd like to get the ball rolling and, drawing from the diverse range of experiences here, really talk about what we want in terms of a global human rights approach to responsible AI governance. And to do that, we have an all-star lineup of speakers from, again, a number of different stakeholders.
I'm going to briefly introduce them, but I encourage you all, when you make your interventions, to provide a bit more background on where you come from, the type of work you do, and really why you're here today and your motivations. In no particular order, we have Marlena Wisniak from the European Center for Nonprofit Law, to my left. We have Vladimir Jure from Direcho Societales, who's over there. We also have Tara Denham from Global Affairs Canada, and we have Pratek as well from UNESCO. So thank you for all being here in person. And online, we have Sholana Mae from Google, and Oyabisi Olesi from the Nigeria Network of NGOs, or NNNGO. In terms of structure, we have a bit of time on our hands, so what we're going to do is divide the session into two parts. The first part is going to have a particular focus on the international human rights framework, and also on the ongoing proliferation of regulatory processes on AI that I've alluded to already. We'll then take a pause for questions from the audience, as well as from those joining online. And I want to give a special shout-out to Marina from Transparencia Brazil, who is taking in questions and feeding them to me so that we can have a hybrid conversation. After this first part, we'll have a second part that looks a bit more at the inclusion of voices in these processes, and at how engagement from the global majority is imperative. That will be followed by a final brief Q&A session, and then closing remarks. I hope that makes sense and sounds structured enough and productive, and I look forward to your questions and interventions later. But let's get into the meat of things. Looking at the international human rights framework, we're at a point where various efforts on global AI governance are happening at breakneck speed. I've mentioned a number of them, including the Hiroshima process that was just spoken about yesterday, if you caught the main event. So my first question, and my prompt, is to Marlena, on my left: given your work at ECNL and your ongoing efforts to advocate for rights-respecting approaches in these AI regulatory processes, what do you consider to be missing in terms of aligning them with the international human rights framework? And again, if you could provide a brief background and introduction, that'd be great. Thanks.

Marlena Wisniak:
Sure, thanks so much Ian, and hi everyone. Welcome to day two, I think it is, of the IGF; it feels like a week already. My organization, the European Center for Nonprofit Law, is a human rights org that focuses on civic space and freedom of assembly and association, and we also work a lot on freedom of expression and privacy. Over the past five years, we've noticed that AI was a big risk, and to some extent an opportunity, but with great potential for harm as well for activists, journalists, and human rights defenders around the world. The first five years of our work in this space were rather quiet, or I'd say it was more of a niche area, with only a handful of folks working at the intersection of human rights and AI, and by handful, I really mean like 10 to 15. This year, the discussion around AI has expanded very, very quickly, maybe with ChatGPT as the trailblazer, and it's great to see that at the UN there is interest in this topic and in panels like this that bring a human rights-based approach to AI. Ian mentioned a couple of the ongoing regulations. I won't bore you this morning with a lot of legalese, but the core frameworks that we focus on and where we advocate for a human rights-based approach at ECNL are, obviously, the EU AI Act, where trilogues are happening as I speak, the Council of Europe Convention on AI, and national laws as well; we've seen these expand a lot around the world recently. We engage in standardization bodies, like the US NIST, the National Institute of Standards and Technology, and the EU's CEN-CENELEC, and of course international organizations like the OECD and the UN. And you mentioned, Ian, the Hiroshima process; that's one we're following closely as well.
In the coming years, as the AI Act is expected to be adopted in the next couple of weeks, and definitely by early 2024, we'll be following the implementation of the Act. I'll use this as a segue to talk a little bit about the core elements that we see should be part of any AI framework and AI governance from a human rights-based approach. That begins with human rights due diligence and meaningful human rights impact assessments, in line with the UN Guiding Principles on Business and Human Rights. With AI, we really see an opportunity to implement mandatory human rights due diligence, including human rights impact assessments; in the EU space that also involves other laws, but beyond the EU, globally, the UN and other institutions and fora have an opportunity right now to actually mandate meaningful, inclusive, and rights-based impact assessments. That means meaningfully engaging stakeholders as well, especially external stakeholders like civil society organizations and affected communities around the world. So stakeholder engagement is a necessary and cross-cutting component of AI governance, development, and use, and at ECNL we look both at how AI is governed and at how it's developed and deployed around the world. We understand stakeholder engagement as a collaborative process where diverse stakeholders, both internal, meaning those that develop the technologies themselves, and external, can meaningfully influence decision-making. On the governance side of things: when we are consulted in these processes, including in a multi-stakeholder forum like the IGF, are our voices actually heard? Can they impact the final text and provisions of any laws or policies that are implemented? And on the AI design and development side: when tech companies or any deployer of AI consult external stakeholders, do they actually include their voices, and do those voices inform and shape final decision-making?
In the context of human rights impact assessments of AI systems, stakeholder engagement is particularly effective for understanding what kinds of AI systems are even helpful or useful, and how they work. Looking at the product and service side of AI, machine learning, or any algorithmic data analytics system, we can shape better regulation and develop better systems by including these stakeholders. Importantly, external stakeholders can identify specific potential positive or adverse impacts on human rights, such as the implications, benefits, and harms of these systems for people, looking at marginalized and already vulnerable groups in particular. If you're interested in learning more about stakeholder engagement, check out our Framework for Meaningful Engagement; shameless plug, so Google it or go to our website and look it up. There we provide concrete recommendations for engaging internal and external stakeholders in AI systems, and these recommendations can also be used for AI governance as a whole. Moving on, I'd like to touch briefly on transparency, which, in addition to human rights impact assessments and stakeholder engagement, we see as a prerequisite for AI accountability and a rights-based global AI governance. Not to go into too much detail, but we believe that AI governance should mandate that AI developers and deployers report on data sets, including training data sets; performance and accuracy metrics; false positives and false negatives; human-in-the-loop and human review; and access to remedy. If you'd like to learn more about that, I urge you to look at our recent paper, published with Access Now just a couple of weeks ago, on the EU Digital Services Act, with a spotlight on algorithmic systems, where we outline our vision of what meaningful transparency would look like.
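To make the reporting idea above concrete: false positive and false negative rates are straightforward to compute once a deployer logs predictions against outcomes. The sketch below is only an illustration, with made-up data and a hypothetical function name, of the kind of metric such a transparency report could disclose:

```python
# Illustrative sketch only: computing the false positive / false negative
# rates that a transparency report on a binary AI classifier might disclose.

def error_rates(actual, predicted):
    """Return (false_positive_rate, false_negative_rate) for binary labels."""
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    negatives = sum(1 for a in actual if a == 0)
    positives = sum(1 for a in actual if a == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

# Hypothetical logged data: 1 = flagged by the system, 0 = not flagged.
actual    = [1, 0, 0, 1, 0, 1, 0, 0]
predicted = [1, 1, 0, 0, 0, 1, 0, 1]
fpr, fnr = error_rates(actual, predicted)
print(fpr, fnr)  # fpr = 0.4 (2 of 5 negatives), fnr = 1/3 (1 of 3 positives)
```

The same logged outcomes can then be broken down per demographic group, which is where the disparate impacts on marginalized communities mentioned above typically show up.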
Finally, access to remedy is a key part of any governance mechanism. That includes both internal grievance mechanisms within tech companies and AI developers, as well as, obviously, remedy at the state level and judicial mechanisms; as a reminder, states have the primary responsibility to protect human rights and to provide remedy when these are harmed. One aspect that we often see in AI governance efforts, especially by governments, is the inclusion of exemptions for national security or counter-terrorism and, broadly, emergency measures. At ECNL, we caution against over-broad exemptions that are too vague or broadly defined, as these can be, at best, misused and, at worst, weaponized to restrict civil liberties. So, if there are any exemptions for things like national security or counter-terrorism in AI governance, we really urge a narrow scope, sunset clauses for emergency measures, meaning that any exemptions in place will end within due time, and a focus on proportionality. And finally, what is missing? What we see today, both in the EU and globally, is that AI governance efforts mostly take a risk-based approach, and the risk considered is often risk to finance, business, national security, terrorism, these kinds of things, but rarely to human rights. The AI Act itself in the EU is regulated under a product liability and market approach, not a fundamental rights one. In our 2021 research paper, we outlined key criteria for evaluating the risk level of AI systems from a human rights-based approach. We recommend determining the level of risk based on the product design, the severity of the impact, any internal due diligence mechanisms, the causal link between the AI system and adverse human rights impacts, and the potential for remedy. All these criteria help us really focus on the harms of AI to human rights.
Last thing, and then I'll stop here. Where AI systems are fundamentally incompatible with human rights, such as biometric surveillance deployed in public spaces, including facial and emotional recognition, we, along with a coalition of civil society organizations, advocate for a ban on such systems. And we've seen a proliferation of such bans, for example in the US at the state level, and right now in the latest version of the AI Act adopted by the European Parliament. That means prohibiting the use of facial recognition and remote biometric recognition technologies that enable mass surveillance and discriminatory targeted surveillance, in public and publicly accessible spaces, by the government. And we urge the UN and other processes, such as the Hiroshima process, to include such bans. Thank you, Ian.

Ian Barber:
Thank you, Marlena. That was amazing. I think you actually just preempted my immediate question, which was: what is really needed when it comes to AI systems that do pose an unacceptable risk to human rights? So thank you for responding preemptively. And I very much agree that having mandatory due diligence, including human rights impact assessments, is imperative. I think what you spoke to in terms of stakeholder engagement rings true, as does the issue of transparency and the need for it to foster meaningful accountability and to introduce remedies. So thank you very much for that overview. Given that there are these initiatives and so many different elements to consider, whether transparency, accountability, or scope, I'll turn to you, Tara, and ask: how is a government such as Canada's approaching AI governance and considering human rights, both in terms of your domestic priorities and in terms of regional or international engagement? If you could speak a bit to how these all feed into each other, that'd be great. Thank you.

Tara Denham:
Sure. Thank you, and thank you for inviting me to participate on the panel. As I said, I'm Director General of the Office of Human Rights, Freedoms, and Inclusion at Global Affairs Canada, which perhaps warrants a bit of an explanation, but I think it actually aligns really well as a starting position, because it is within the Office of Human Rights, Freedoms, and Inclusion that we've embedded the responsibility for digital policy and cybersecurity policy from a global affairs perspective. That was our starting point: since the integration of those policy positions and that policy work a number of years ago, we have always started from a human rights perspective. This goes back about six or seven years, to when we created this office and integrated the human rights perspective into our digital policy from the beginning, including some of our initial positions on the development of AI considerations and the geopolitics of artificial intelligence. So I think that, in and of itself, is perhaps unique among government structures. Having said that, I would also acknowledge that across a lot of government structures, we are all trying to figure out how to approach this. But as the DG responsible for these files, it does give a great opportunity to integrate that human rights policy position from the beginning. When we first started to frame some of our AI thinking from a foreign policy lens, it was always from the human rights perspective. I can't say that has always meant we've known how to do it, but I can say it has always pushed us to think and challenge ourselves: how can we use the existing human rights frameworks, and how can we advocate for that at every juncture, including domestically?
I wanted to give, perhaps, a snapshot of how we're approaching it in Canada, some of our national perspectives, and then how we're linking that to the international level, and of course how we're integrating a diversity of voices into that in a concrete way. When we started talking about this a number of years ago, there was a debate, and I'm sure many of you participated in it, around whether it should be legislation first, or guiding principles, or frameworks, or voluntary approaches. For a number of years, that was the cycle we were in, and I would say that over the last year and a half to two years, that's not a debate anymore. We have to do all of them, and they're going to be going at the same time. Right now, where I'm standing, it's more about how we are going to integrate them and how they are going to feed off of each other as we're moving on the domestic front at the same time as the international. Typically, from a policy perspective, you would have your national positions defined, and those would inform your international positions. Right now, the world is just moving at an incredible pace, so we're doing both at the same time, and we have to find those intersections; but that also takes a conscious decision across government, and when I say across government, I mean across our national government. And of course, this is within a framing we're all very familiar with: domestically, we are all aiming to harness AI to the greatest capacity, because of all the benefits there are, but we're always very aware of the risks. That is a very real tension that we need to keep integrating into the policy discussions that we're having.
Our belief and our position, in our national policy development and internationally, is that this is where a diversity of voices is absolutely required, because the views on risk will be very different depending on the voice and the community that you're inviting and actually engaging in the conversation in a meaningful way. So it's not just inviting people to the conversation; it's actually listening and then shaping your policy position. In Canada, and I'm not going to go into great detail, but just to give you a snapshot of where we've started: within the last four years, we've had a directive on how automated decision-making will be handled by the government of Canada, accompanied by an algorithmic impact assessment tool. That was the first wave of direction we gave in terms of how the government of Canada was going to engage with automated decision-making. Then, within the last year, there's been a real push related to generative AI. Just in the last couple of months, there was the release of a guide on how to use generative AI within the public sector. A key point to note here is that it is a requirement for the government of Canada to engage stakeholders before deploying generative AI. Before we actually roll it out, we have to engage with those that will be impacted, whether it be for public use or service delivery. And then, just last month, came a voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. We've seen the U.S. make similar announcements, and there is similar work that we're doing in the G7; a lot of these codes of conduct and principles are coming out at the same time. This is also accompanied in Canada by legislation: we have an AI and Data Act going through the legislative process.
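The algorithmic impact assessment mentioned above works as a scored questionnaire that maps answers about a proposed system to an impact level with escalating oversight requirements. The sketch below illustrates only that general mechanic; the factors, weights, and thresholds are invented for illustration and are not the official tool's:

```python
# Hypothetical sketch of a questionnaire-based algorithmic impact assessment.
# The real Canadian AIA tool has its own questions, weights, and thresholds.

from typing import Dict

# Invented risk factors and weights (not the official questionnaire).
WEIGHTS: Dict[str, int] = {
    "affects_rights_or_freedoms": 4,
    "fully_automated_decision": 3,
    "uses_personal_data": 2,
    "impacts_vulnerable_groups": 3,
}

def impact_level(answers: Dict[str, bool]) -> int:
    """Map yes/no answers to an impact level from I (1) to IV (4)."""
    score = sum(w for factor, w in WEIGHTS.items() if answers.get(factor))
    max_score = sum(WEIGHTS.values())
    # Higher scores would trigger stronger oversight requirements.
    if score <= max_score * 0.25:
        return 1
    if score <= max_score * 0.5:
        return 2
    if score <= max_score * 0.75:
        return 3
    return 4

level = impact_level({
    "affects_rights_or_freedoms": True,
    "fully_automated_decision": True,
    "uses_personal_data": True,
})
print(level)  # prints 3 (score 9 of 12 -> level III)
```

In the actual directive, higher impact levels trigger stronger requirements, for example around peer review and human involvement in decisions.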
So, as I said, these are the basis of the regulations and the policy world that we're working in within Canada. What I'd note is that these are all developed by multiple departments. That's where I think we're challenging ourselves as policymakers, because we also have to increase our capability to work across sectors and across departments. And I would say that from where we started, when we were developing Canada's Directive on Automated Decision-Making, through to the Code of Conduct that was just announced, we moved from informal consultations across the country, trying to engage with the private sector and academia, to formal consultation on the voluntary code. We have a national table set up now, which includes the private sector, civil society, federal, provincial, and territorial governments, and Indigenous communities. So we've also had to make a journey from ad hoc consultation to formalized consultation when we're actually developing these codes. Then, how does that translate internationally? As we're learning domestically at a rapid pace, perhaps I can pull out a few examples of how we've tried to reflect that internationally. I'm going to harken back to the UNESCO Recommendation on the Ethics of AI from 2021. This is where, again, we made the conscious decision to harness the national tables that were in place to define our negotiating positions when going internationally, given that our national positions weren't as defined. We also wanted to leverage the existing international structures, and I think that's really important as we talk about the plethora of international structures at play. This is where we've used the Freedom Online Coalition.
So, you have to look at the structures that you have, the opportunities that exist, and the means by which we can do wide consultation on the negotiating positions that we're taking. For the UNESCO Recommendation, that's where we used the Freedom Online Coalition, which has an advisory network that also includes civil society and tech companies. Again, it's about proactively seeking those opportunities, shaping your negotiating positions in a conscious way, and then bringing those to the table. We're also involved in the Council of Europe negotiations on AI and human rights, where we are again leveraging our tables, but also advocating for a more diverse representation of countries at the table. You have to seize the opportunity. We see this as an opportunity to engage effectively in the negotiation, and we want to continue to advocate for more countries to participate and for more stakeholder groups to be able to engage. Maybe I'll finish with some of the lessons we've learned from doing this. It's really easy to recite all that and make it sound like it was easy to do. It's not. Some of the lessons I would pull out: number one, stakeholder engagement requires a deliberate decision to integrate it from the start, and the most important word in that sentence is 'deliberate'. You have to think about it from the beginning and put it in place. As I've said a few times, you have to make sure you're creating that space for voices to be heard, and then actually follow through on it. The second: it takes time, it's complex, and there will be tensions. And there should be tensions, because if there are no tensions in the perspectives, then you probably haven't created a wide enough table with a diversity of voices. So, and my team is probably tired of me saying this, you have to get comfortable with living in a zone of discomfort.
If you're not in a zone of discomfort, you're probably not pushing your policy and your own view, and again, I'm coming from a policy perspective; you have to do that to find the best solutions. As policymakers, this is also going to drive us to increase our expertise. Traditionally, we would come to the table with our policy knowledge, our human rights experience, and those sorts of elements, but we've tried a lot of different things in terms of integrating expertise into our teams and into our consultations. So you have to think about what it's going to mean, in a policy world, to do this. And finally, I'll just say again: leverage the structures that are in place. We have to optimize what we have. It's sometimes easier to say, well, it's broken, let's create something new, but I do believe we can continue to optimize, and if we're going to create something new, it should be a conscious decision, thinking about what is missing from what we have that needs to be improved upon. Perhaps I'll stop there.

Ian Barber:
Thank you, Tara. That was great and really comprehensive. I think at the beginning you alluded to the challenges in applying the international human rights system to the work that you're doing, but I'm glad Canada is very much doing that, taking a multi-pronged approach that puts human rights front and center at both the national and international levels. And I really agree that there is very much a need for deliberate stakeholder engagement, and I appreciate the work that you've been doing on that, as well as the need to leverage existing structures, ensure that these conversations are truly global and inclusive, and ensure that the expertise is there as well. So thank you so much. I think your comments on UNESCO serve as a perfect segue to my next prompt, for which I'll be turning to Pratek. UNESCO developed the Recommendation on the Ethics of AI a couple of years ago. As has been alluded to, the conversation has gone from 'do we need voluntary, self-regulatory, or non-binding measures?' to 'do we perhaps need something more binding?', and I think that is very much the direction of travel now. But I'm curious to hear from you a bit more about your experience at UNESCO in terms of implementing the Recommendation at this point, and about how UNESCO in general will be playing a larger role in AI governance and human rights moving forward. Thank you.

Pratek Sibal:
Thanks, Ian. How much time do I have? (You have five to six minutes, but there's no rush. I want to hear your comments and your interventions.) First of all, thanks for organizing this discussion on human rights-based approaches to AI governance. I will perhaps focus more on the implementation part and share some really concrete examples of the work that we are doing with both rights holders and duty bearers. First, it's good to mention that the Recommendation on the Ethics of AI is human rights-based: it has human rights as a core value, and it is really informed by human rights. I will focus on the judiciary first. While we are talking about the development of voluntary frameworks, binding instruments, and so on, there's a separate discussion about whether it's even possible, in this fractured world we are living in, to have a binding instrument; it's very difficult. If you were to go and negotiate something today, it would be very difficult to get a global view. So we have a recommendation adopted by 193 countries, which is an excellent place to start, and I'm really looking forward to the work that colleagues at the Council of Europe are doing to have a regional instrument, while also working with other countries. Now, in my team, within my files, we also started looking at the judiciary, because you can already start working with duty bearers and implementing international human rights law through their decisions. But the challenge you face is that a lot of the time they don't have enough awareness of what AI is and how it works; there's a lot of myth involved. There is also this assumption that the technology is out there and it works. If an AI system is being used, and in a lot of countries they're used for predictive purposes, they will be like, oh yeah, it's the computer algorithm giving the score, it must be right.
So all these kinds of things need to be broken down and explained, and then the relevant links with international human rights law need to be established. This is what we started to do sometime around 2020. We at UNESCO have an initiative called the Global Judges Initiative, which started in 2013, where we work on freedom of expression, access to information, and the safety of journalists. Through this work, we've reached about 35,000 judicial operators in 160 countries, through online trainings in the form of massive open online courses, in-person trainings, and helping national judicial training institutions develop curricula. Around 2020, we started to discuss artificial intelligence. The Recommendation was under development, and we were already thinking about how we could actually implement it beyond the great agreement that we have amongst countries. We first launched a survey to this network, and about 1,200 judicial operators, and when I say judicial operators, I mean judges, lawyers, prosecutors, and people working in legal administrations, responded from about 100 countries. They said two things. First, we want to learn how AI can be used within judicial and administrative processes, because in a lot of countries they are overworked and understaffed. I've been talking to judges, and they say, yeah, if I take a holiday, my colleagues have to work 16 hours a day. That is a key driver for them to look at how the workload can be streamlined. The next aspect is really about the legal and human rights implications of AI when it comes to, say, freedom of expression and access to information. Let me give you some examples here. In Brazil, for instance, there was a case in the Sao Paulo metro system, which was using a facial recognition system on its doors to detect your emotions and then show advertisements.
I think it was the data protection authority in Brazil that said you can't do that: you have no permission to collect this data, and so on. And this did not really require an AI framework. So my point is that we should not think in just one direction, that we have to work on a framework first and then implement human rights. We already have international human rights law, which is part of jurisprudence in a lot of countries and can be used directly. So let's not give people a reason to wait and say, let's first have a regulation in our country. To give some other examples: in Italy, for instance, they have these food delivery apps like Deliveroo, and another one called Foodinho. There were two cases there where one of those apps, I don't remember which one, was penalizing food delivery drivers if they cancelled their scheduled deliveries for whatever reason, giving them a negative score. The algorithm was found to be biased: it was giving more negative points to those who cancelled vis-a-vis the others. And the Data Protection Authority basically said, based on the GDPR, that you cannot have this going on. Then we had the case Marlena was mentioning about facial recognition in the public sphere; I think it was in the UK, where the South Wales Police was using facial recognition systems in public spaces. This went to the Court of Appeal, which said, you can't do this. These are just examples of what is already happening and of how people have already applied international human rights standards. Now, what are we doing next? In our work with the judiciary, we launched in 2022 a massive open online course on AI and the rule of law, which covers all these dimensions, and we made it available in seven languages. And it was kind of a participative dialogue.
We had the president of the Inter-American Court of Human Rights, the Chief Justice of India, professors, and people from civil society coming and sharing their experiences from different parts of the world, because everyone wants to learn in this domain; as Canada was mentioning, there's a lot of scope to learn from practices in other countries. That was our first product, and it reached about 4,500 judicial operators in 138 countries. Then we realized that individual capacity building is one thing, but we need to focus more on institutional capacity building, because that's more sustainable in the long term. So, with the support of the European Commission, we've now developed a global toolkit on AI and the rule of law, which is essentially a curriculum with four modules, covering, among other things, the human rights impact assessments that Marlena was talking about before. We are actually going to go to the judiciary and say: okay, this is how you can break things down; this is how you look at data. What is the quality of the data? When you're using an AI system, how do you check what data was used and whether it was representative or not? We are breaking these things down practically so that they can at least start questioning. You don't expect judges to become AI experts at all, but at least to have the mindset to say: it's a computer, but it is not infallible. We need to create that. We developed this curriculum through an almost year-long process of reviews and so on, and now we have the pilot toolkit available, which we are implementing first with the Inter-American Court of Human Rights in November, actually next month, for a regional training. We will then get their feedback, because it's important to work with the community, including the trainers, on what works for them. We are hopefully going to do it for the EU as well.
We are going to do it in East Africa with the East African Court of Justice next year; in fact, we are hosting a conference with them later this month in Kigali. So at this moment we are piloting this work, organizing these national and regional trainings with the judiciary, and then, as a next step, hoping that this curriculum is picked up by national judicial training institutions and integrated, so that they own it, they shape it, they use it. That is how we see international human rights standards percolating down to enhanced capacities through this kind of program. And as an open invitation: we are just piloting the toolkit, so we are open to feedback from the human rights experts here on how we could further improve and strengthen it. Perhaps I’ll also briefly mention the rights holders’ side. We’ve developed some tools for youth, or even the general public, to engage them in a more interesting way. We have a comic strip on AI, which is now available in English, French, Spanish, and Swahili; I think a language of Madagascar is also coming, and German and Slovenian soon. These are tools that we make available to communities to co-own and develop their own language versions, because part of strengthening human rights globally is making that content available in different languages, so that people can associate with it better. We have a course on defending human rights in the age of AI, which is available in 25 languages. It’s a micro-learning course on a mobile phone that we developed in a very collaborative way with UNITAR, which is a United Nations training and research institution, as well as a European project called Saltopi, which involved youth volunteers who wanted to take it to their communities and say: actually, in our country, we want this to be shared, and so on.
So there are a number of tools that we have, and communities of practice with whom we work on capacity building, actually translating some of these high-level principles, frameworks, and policies, hopefully a few years down the line, into judgments which become binding on governments, on companies, and so on. I’ll stop here. Thank you.

Ian Barber:
Thank you. That’s great. And thank you for reminding us that we already have a lot of frameworks and tools that can be leveraged and that are already being applied in domestic contexts as well. I really commend your work on AI and human rights in the judiciary. It’s important that we work on the institutional knowledge and capacity you were speaking to, and that we work with various stakeholders in an inclusive manner. So thank you. At this point, we’ve heard from Marlena about what’s truly needed from a human rights-based approach to AI governance; we’ve heard from Tara what governments and states like Canada are doing to champion this approach at domestic and national levels; and we’ve heard from Prateek about the complementary work being done by international organizations and the implementation happening there. So I want to pause at this point to see if anyone on the panel has any immediate reactions to anything that’s been said, and then we might have time for one quick question before we change direction a little bit. If there are any immediate reactions, feel free to jump in. If not, that’s okay too, and the same goes for online. So, yeah, we can also go to a brief question; if that’s possible, please feel free to jump in. I think there’s a microphone there, but we can also hand one over. If you could introduce yourself, that’d be great too. Thank you.

Audience:
Okay, thank you. I’m Steven Vosloo, a policy specialist at UNICEF, and it’s really great to hear about the different initiatives that are happening and the different approaches. Maybe it’s natural, as Thomas Schneider was saying in the previous session, that we will see many different governments and countries approaching this differently, because nobody really knows how to do this. So this is more of a request: to think about not just the what of governance but also the how, and to do analysis of these different approaches to see what works, from voluntary codes of conduct to more industry-specific legislation. I think that’s really the next phase as we go from policy to practice. This will play out over a number of years, but it would be really helpful from the UNESCOs and the OECDs, who are already starting to build up this knowledge base. Clearly, there are going to be some things that work well and some that don’t. We also engage children: we created policy guidance on AI for children and engaged children in that process. It was a very meaningful and necessary process that really informed and enriched the product. So it’s really encouraging to hear about the multi-stakeholder approach that’s ongoing, not just ad hoc. But yeah, that’s kind of a request. And perhaps, if you have any thoughts, how do you see these approaches playing out as we look ahead, and what role might the organizations you’re in play, not just in documenting what may be governed, but how? Thank you.

Tara Denham:
First of all, as a mom, I would love to see that information about AI and children. That’s fantastic. On your comment about needing to do the analysis of what’s working and what’s not: this is where we need to build that capacity globally, because it’s one thing for Canada to do analysis of what’s working in Canada, but we have to really understand the risks and how AI is having an impact in different communities and different countries. This is where we have been working with the International Development Research Centre in Canada, IDRC, and I don’t know if there are any colleagues in the room, but they do a lot of funding and capacity building in different nodes around the world, specifically on AI capacity building and research. That’s where we’ve also had to really link up, so that we can leverage the research they’re supporting as fast as possible. So again, it’s about challenging ourselves as policymakers to keep seeking it out; the research is there, and we just need more of it. I just wanted to advocate for that. Thank you.

Marlena Wisniak:
Yeah, thanks so much for that question. I definitely support multi-stakeholder participation and engaging stakeholders in the process of policymaking itself. One challenge we see a lot is that there’s no level playing field between different stakeholders. I don’t know if there are many companies in the room, but we often see that companies have a disproportionate advantage, I’d say financially and in access to policymakers. When I mentioned at the beginning of my intervention that there’s a handful of human rights folks who participate in AI governance, it really is a handful compared to the hundreds, actually thousands, of folks in the public policy sections of companies. So that’s something I would urge international organizations and policymakers at the national level to consider: for civil society it’s really an uphill battle in terms of capacity, resources, and finances, and obviously marginalized groups and global majority-based orgs are disproportionately hit by that. To Canada, as a Canadian: I imagine the government is primarily engaged with national stakeholders, which is obviously important, and I also encourage you to think about how Canadian laws can influence, for example, global majority-based regulation. That’s something we think about a lot in the EU with the so-called Brussels effect, understanding that many countries around the world, especially those with more repressive regimes or authoritarian practices, do not necessarily have the democratic institutional pillars that the EU or Canada have. So that’s just added nuance to multi-stakeholderism: yes, and in a way that really enables inclusive and meaningful participation of all. Thank you.

Prateek Sibal:
So a couple of quick points. First, on Canada: I think they’re doing a fantastic job, for instance in Africa and Latin America with the AI for Development project, and since 2019 I have seen the kind of communities that have come up and been supported to develop, say, language datasets, which can then lead to the development of applications in healthcare or agriculture, or to strengthen in a more sustained way the capacities of civil society organizations that can inform decision-making and policymaking. We at UNESCO have particularly benefited from this, because with the Recommendation on the Ethics of AI, which is being implemented in a lot of countries, we work in a multi-stakeholder manner: we generally have a national multi-stakeholder group which convenes and works, and there, the capacity of civil society organizations to actually analyze the national context and contribute to these discussions is very important. So over the past four or five years I have already seen the results of the work that Canada and IDRC are doing in my own work; there’s good credit due there. On your point about policymaking at the international level, recommendations and so on: the process of international standard-setting and policymaking has evolved over the years. Many years ago we used to be in a mode of technical assistance, where someone would go to a country and help them develop a policy; an expert would fly in, stay there for some months, and work. That model is changing, in the sense that you are now developing policies or frameworks at the global level with stakeholders from the national or whatever level involved in the development of these frameworks.
So what happens is that when stakeholders have worked on something at the global level and then have to translate it at the national level, they naturally go towards this framework which they have worked on and have great knowledge of. It’s an implicit way of policy development which has been the model since the early 2000s, because otherwise there’s not enough funding available, and it’s also not sustainable if global frameworks aren’t developed in a consultative manner. So there is more ownership of these frameworks, which then become the natural go-to tools at the national level as well. That’s, I think, an interesting way to develop. And that’s why we are talking about multi-stakeholderism. A lot of times in fora like this, multi-stakeholderism just becomes a buzzword: yes, we should have everyone at the table. That is not all it means. We’ve actually produced guidance on how to develop AI policies in a multi-stakeholder manner along the policy cycle, from agenda setting to drafting to implementation and monitoring. And there’s a short video I’m happy to share later with the community. Thank you very much.

Ian Barber:
I know we have one speaker. Just really quickly, if you could make your question, and then I have three more interventions from people, including online. So maybe they can consider your question in their responses, and if not, then we can come back to it at the end. I just want to ensure that we make time for them. So if you can be brief, that’d be very much appreciated.

Audience:
Can I ask a question? Okay, thank you so much. Svetlana Zenz, Article 19. I’m working on engaging TAG for internet freedoms in Asian countries: Myanmar, Vietnam, China. My question is, I think, more for UNESCO and Canada, since they are the ones providing some of the global policies. Would you recommend some mechanisms we could implement in countries with authoritarian regimes to monitor responsible AI, especially on the private sector side? Because in the Western world, or the parts of the world that are more human rights friendly, it’s much easier to implement those policies than in authoritarian countries. Thank you.

Ian Barber:
Thank you very much. We’ll be coming back to these questions as well, and I think that’s actually a good segue to the next intervention. I’m going to turn to Shahla, who’s joining online from Google, as it’s important to consider private sector stakeholders in the room as well. Shahla, if you’re connected with us, my question for you is: aside from these government and multilateral efforts, it’s clear that the private sector plays a key role in promoting human rights and AI governance frameworks. So could you speak about your work at Google, its perspective and ongoing efforts on AI governance, and how you’re working to promote human rights? And if you can speak to the questions that have been asked, that’d be fantastic as well. Thank you so much for joining, and for your patience.

Shahla Naimi:
Sure, thank you so much for having me today, and apologies that I was unable to join in person, but I really do appreciate the chance to join virtually. I’ll try to keep this brief; I want to make sure we get to a more dynamic set of questions, and I know there are other speakers as well. To take a step back: I sit on Google’s human rights program, which, for those who are not familiar, is a central function responsible for ensuring that we’re upholding our human rights commitments, and I can share more on that later. It applies across all the company’s products and services, across all regions. This includes overseeing the strategy on human rights, advising product teams on potential and actual human rights impacts, and, quite relevant to this discussion, conducting human rights due diligence and engaging external experts, rights holders, stakeholders, et cetera. Maybe just to take a brief step back, I’ll share a little bit of our starting point as a company, which is really true excitement about the ways that AI can advance human rights and create opportunities for people across the globe. That doesn’t just mean potential advancements, but progress we’re already seeing: putting more information in the hands of human rights defenders in whatever country they are in, keeping people safer from floods and fires, particularly knowing that these disproportionately affect the global majority, and increasing access to health care. One that I’m particularly excited about is something we call our 1,000 Languages Initiative, which is working on building AI models that support the 1,000 most widely spoken languages. We obviously live in a world where there are over 7,000 languages, so I think it’s a drop in the bucket, but we hope it’s a useful starting point.
But to turn to our topic at hand: none of this is possible if AI is not developed responsibly, and, as was noted in the introduction, this really is an effort that necessarily needs government, civil society organizations, and the private sector involved in a deeply collaborative process, maybe one we haven’t even seen before. For us as a company, the starting point for responsible AI development and deployment is human rights. For those who are maybe less familiar with our work in this space: Google has made a number of commitments to respecting the rights enshrined in the Universal Declaration of Human Rights, which is turning 75 this year, and its implementing treaties, as well as the UN Guiding Principles on Business and Human Rights, which I think Marlena mentioned at the beginning. So what does that actually look like in practice? As part of this, years ago in 2018, when we established our AI Principles, we embedded human rights into them. For those who are not familiar, our AI Principles describe our objectives to develop technology responsibly, but also outline specific application areas that we will not pursue, and that includes technologies whose purpose contravenes international law and human rights. To provide a tangible example: let’s imagine we’re thinking of developing a new product like Bard, which we released earlier this year. This would go through our AI Principles review via our Responsible Innovation team, and as part of that process my team would also conduct human rights due diligence to identify any potential harms and develop, alongside various teams, legal and product in particular, appropriate mitigations around them. One example of this, which is a public case study we’ve released, is around our Celebrity Recognition API.
So back in 2019, we already saw that the streaming era had brought a really remarkable explosion of video content. In many ways that was fantastic: more documentaries, more access for filmmakers to showcase and share their work globally, and so on. But there was also a really big challenge, which was that video was pretty much unsearchable without expensive, labor-intensive tagging processes. This made things really difficult and expensive for creators. So a discussion popped up about better image and video capabilities to recognize an international roster of celebrities as a starting point. Our AI Principles review of this triggered additional human rights due diligence, and we brought on Business for Social Responsibility, BSR, which some are familiar with, to help us conduct a formal human rights assessment of the potential impact of a tool like this on human rights. Fast forward: the outcome was a very tightly scoped offering, one that defined celebrity quite carefully, established manual customer review processes, and instituted an expanded terms of service. All of this actually ended up later informing our company-wide stance on facial recognition, and it took into consideration quite a bit of stakeholder engagement in the process. Though it was developed more recently than this particular human rights assessment, I’ll also plug the ECNL framework for meaningful engagement, because it has served as a really helpful guide for us since its release. I want to share this example for two reasons. One, human rights and the established ways of assessing impact on human rights have been embedded into our internal AI governance processes from the beginning. And two, as a result of that, we’ve actually been doing human rights due diligence on AI-related products and features for three years.
And that’s been a priority for us as a company for quite a long time. To briefly turn to the second part of your question, I’ll just flag that I think we really do need everybody at the table, and that’s not always the case right now, as others have mentioned. As an example, we were excited to be part of the moment at the US White House over the summer that brought together industry to commit to advancing responsible practices in the development of AI. Earlier this fall, we released our company’s progress against those commitments. That included launching a beta of SynthID, a new tool we developed for watermarking and identifying AI-generated images; a really core component informing the development of that particular product was concerns from civil society organizations, academics, and individuals in the global majority, keeping in mind that we have 75 elections happening globally next year, really concerns around misinformation and its potential proliferation. It also included establishing a dedicated AI red team and co-establishing the Frontier Model Forum to develop standards and benchmarks for emerging safety issues. We think these commitments, and companies’ progress against them, are an important step in the ecosystem of governance, but they really are just a step. So we’re particularly eager to see more space for industry to come together with governments and civil society organizations, more conversations like this. I think Tara mentioned the Freedom Online Coalition, so it could be through existing spaces like the FOC or the Global Network Initiative, but also potentially new spaces, as we find them necessary. And I’ll just mention one last thing briefly, because I know I’m probably over my time.
Because it did come up more specifically: when thinking about AI regulation at Google, at the very least, we think about it in a few ways, something we call the four S’s. The structure of the regulation: is it international, is it domestic, is it vertical, is it horizontal? The scope of the regulation: how is AI being defined, which is not the easiest thing to do in the world. The subjects of the regulation: developers, deployers. And finally the standards of the regulation: what risks, and how do we consider those difficult trade-offs that were mentioned earlier, I think by the person who asked the first question. So these are just some of the things we’re taking into consideration in this process, but we’re really hoping that more multi-stakeholder conversations will lead to some international coordination on this front, because our concern is that otherwise we’ll have a bit of a hodgepodge of regulation around the world. In the worst-case scenario, that makes it difficult for companies to comply, stifles innovation, and potentially cuts off populations from what could be transformative technology. It might not be so much an issue for us at Google, where we have the resources to make significant investments in compliance and regional expertise, but we do think it could be a potential issue for smaller players and future players in this space. So I’ll pause there, because I think I probably took up too much time, but I appreciate it, and I’m looking forward to the Q&A.

Ian Barber:
Thank you so much for that overview. That was great. And thank you for highlighting the work happening at Google to support human rights in this context, particularly your work on due diligence, as well as your noting the need for collaboration and for considering global majority perspectives. I think that’s key as well. What I’d like to do now is turn to Vladimir for our second-to-last intervention of the session, and then hopefully turn to a couple of questions at the end. We’ve heard from a couple of different stakeholders at this point, but my question for you is: do you think the global majority is able to engage in these processes? Are they able to effectively shape the conversations that are happening? Derechos Digitales has spoken about the need to consider local perspectives, and I’m curious to hear from you why this is so critical, and about the work you’re doing now. If we can keep the intervention to about four or five minutes, that’d be fine, but I don’t want to cut you off. Thank you.

Vladimir Garay:
Okay, I’ll try to be brief. Well, first of all, thank you so much for the question.
It’s a hard question. And thank you also for the invitation to be part of this panel; I’m very glad to be here. I’m Vladimir Garay, part of Derechos Digitales, a Latin American digital rights organization, and for the last couple of years we’ve been researching the deployment of AI systems in the region in the context of public policy. Part of that work has been funded by IDRC, so thank you. I’m going to tell you a little bit more about that later, but if you’re interested you can go to ia.derechosdigitales.org, and if the URL in Spanish confuses you, come to me and I can give you one of these, and you can find it more easily. So regarding your question: even though there are interesting efforts being developed right now, I think Latin America has mostly lacked the ability to meaningfully engage in and shape processes for responsible AI governance, and this is a consequence of different challenges the Latin American region faces in the local, regional, and global contexts. For example, in the local context, one of the main challenges has to do with designing governance instances that are inclusive and that can engage meaningfully with a wide range of actors, which is at least partly a consequence of a long history of authoritarianism resulting in fragile democracies that are suspicious of
participation, that are dismissive of human rights impacts, or that lack the necessary institutional capacities to implement solutions allowing broad, inclusive, transparent participation. In the global context, we have to address the eagerness of the tech industry to aggressively push a technology that is still not completely mature in terms of our understanding of it, how we think about it, how we think about its limitations, and how we demythologize it. One consequence of this is the proliferation of different proposals for guidance, legal, ethical, and more, so many that it’s hard to keep up. So there’s a sense of overwhelm and inability, which is a difficulty in itself. Also in the global context, I think Latin American and global majority perspectives are often overlooked and disregarded in the international debate about technology governance, probably because, from a technical or engineering standpoint, the number of artificial intelligence systems being developed in Latin America might seem marginal. That is true, especially when compared to those created in North America, Europe, and parts of Asia. But our region has a fundamental role in the production of AI systems, and a better understanding of the relationship of global majority and Latin American countries with AI can be illuminating, not just for Latin America, but for the AI governance field as a whole. What should that look like and what should it include? First, I think it’s important to consider the different roles of global majority countries, and in particular Latin American countries, in the global chain of artificial intelligence development. Our region has a fundamental role in the production of AI systems, for example as a provider of lithium and other minerals necessary for manufacturing different components of AI systems. As you all know, mining consumes large amounts of non-renewable energy and has important environmental impacts, including air pollution and water contamination, which can lead to the destruction of habitats and the loss of biodiversity. It also has a severe impact on the health of the miners, many of whom work in precarious conditions. Latin America also provides raw data, collected from different sources by different means, that is used to train and refine AI models, data that is often collected as a consequence of the lack of proper protection of people’s rights to their personal information. Most of the time, people’s data gets input into AI systems without their consent or even their knowledge. Latin America also provides the labor necessary to train AI systems by labeling data for machine learning. These are usually low-paid jobs performed under very precarious conditions that can have harmful impacts on people’s emotional and mental health, for example when reviewing data for content moderation purposes. This labor is the very foundation of any AI system, but its value is severely underestimated and not properly compensated. In summary, Latin America provides the material resources necessary for the development of AI systems that are designed somewhere else and later sold back to us and deployed in our countries, perpetuating logics of dependency and extractivism. We are both the providers of the inputs and the paying clients for the outputs, but the processes that determine AI governance are often far removed from our region. In general, AI governance should consider the different impacts of AI development on human rights, including those that result from the extraction of these material resources: environmental human rights, workers’ rights, and the rights to data protection, privacy, and autonomy, which are greatly impacted in regions like Latin America.

At Derechos Digitales, we have been looking into different implementations of AI systems through public policy, because the main way most people in the region interact with this type of technology is in their relationship with the state, even if they’re not always aware of it. What we’ve seen is that states are using AI for mediating the relationship with citizens, for surveillance purposes, for making decisions regarding welfare assistance, and for controlling access to and use of welfare programs. However, most of the time, our research shows that these technologies are deployed without meeting transparency or participation standards; they lack human rights approaches and do not include open, transparent, and participatory evaluation processes. There are many reasons for this, from corruption to the lack of capacities and the disregard for human rights impacts, as I mentioned earlier. But we need to overcome this reality, which implies addressing the asymmetries among regions and strengthening democratic institutions. International cooperation is key, and civil society organizations in the region are playing a major role promoting that change. I’ll leave it here for now. Thank you.

Ian Barber:
Thank you, Vladimir, for speaking about the need for regional perspectives, highlighting how these need to feed into global conversations, and specifically how regional considerations are necessary in the context of AI development. I think that’s really helpful. I’m going to turn to our last speaker now, Oyebisi, who I believe has joined us at about 5 a.m. his time and has been online for a very long time, so he definitely deserves a round of applause. Last but definitely not least, my question to you, building on the previous comments, is: how do we similarly ensure that African voices are represented in efforts on responsible AI governance and promoting human rights? And I’m going to weave in a related question we’ve received online: what suggestions can be given to African countries as they prepare strategies or policies on emerging technologies such as AI, specifically considering the risks and benefits? So again, thank you so much for your patience, and thank you for being with us. Cheers.

Oluseyi Oyebisi:
Yes, and thank you so much, Ian, for inviting me to speak this morning. In terms of African voices, we would all agree that the African region is coming late to the party at this time, and we now need to find a way of peer-pressuring the continent to get into the debate. Doing this would mean we are also doing other regions a favor, understanding that the continent has a very large population and that human rights abuses on the continent would snowball into developmental challenges that we do not want across the world. So this is the context in which we have to ensure that we are not leaving the African continent behind, especially given the fact that our governments have not yet figured this out. And this speaks to the question asked by that colleague: our governments have not prioritized the governance of AI. Of course, we need to think of the governance of AI in terms of both hard and soft law, but also understand the life cycle of AI itself. How do we ensure that, along that whole life cycle, we have a government that understands it, a civil society organization that understands it, and a business that understands it? It was great listening to
Of course, we should understand that at some point the politics of AI will also come to bear, because on the continent itself we do not have all of the resources, in terms of intellectual property, to be able to develop the code and all of the algorithms that follow from that. Our universities are not prepared for that yet. But again, in dealing with the technicalities, we also have to build some level of competence. In terms of the international governance of AI and the setting up of international bodies, the African region has to ensure that our missions abroad, especially those that relate with the UN, have the right capacity to take part in the negotiations. And that's why, again, I like how the colleague from Canada said that we would have these contestations, and they are very necessary, because it is within these contestations that we will be able to bring the diversity of opinions and thoughts to the table, such that we have policies that can help us address some of the challenges we might see now and in the future. But how are we going to prepare ourselves as Africans to be able to negotiate, and negotiate better? This speaks to the role of the African Union, including ECOWAS and other regional bodies. I do think the European Union is also setting the agenda, and a kind of model for Africans and other regions to follow, in terms of the deep dive they have done with the AI treaty and how they are using that to help shape a good human rights approach to AI itself. So, answering directly the question you posed to me: whatever advice we give African governments has to be within the context of what we have seen. I want us to understand that hard laws may not necessarily be the starting point for African governments.
It might be soft laws, working with technology platforms on codes of conduct and using lessons from that to progress to hard laws. Governments must also begin to think about regulation in ways that balance the needs of citizens against the disadvantages that we do not see, or do not want to see, and bring citizens themselves into the conversation, such that we are also encouraging innovation. As much as we are encouraging innovation, we are also ensuring that the rights of others are not abused. It's going to be a long walk to freedom. However, that journey must start with Africans: African civil society, African businesses, and African governments investing in the right set of meetings, the right set of research, and the right set of engagements that can get us to become part of the global conversation, while also ensuring that the regional elements of the conversation are taken on board. Especially given the fact that human rights abuses across the region are becoming alarming, and that we now have more governments that are interested in not opening up the space; rather, they want to muffle voices, and freedom of association itself is also affected. So when you look at the civic space ratings for the region, it again gives a picture of how, in some way, some of these conversations might not necessarily be something that would excite the region. But again, this is an assumption; we can still begin to look for that stakeholder pressure, in ways that bring African governments to the table, that help them see the need for this, and that help us get our voices onto global platforms.

Ian Barber:
Thank you, Oyebisi. That's great, and thank you for stressing again the importance of the multi-stakeholder approach, the need for civil society and governments to work together, and for bringing this diversity of perspectives, and African voices and governments, to the table, which requires preparation as well. So thank you. I guess, to the organizers of the IGF, I'm not sure what the timing is in terms of whether we'll be kicked out of the room; if there's a session immediately afterwards, I'm not entirely certain, but I don't see anyone cutting me off. I think it's a lunch break, so what I'll do is say some brief final comments, and then if anyone has any particular questions or wants to come up to the speakers, that might be a more helpful way of moving forward. I don't want to stand between people and their food; never a good position to be in. Pratek, if you want to make one final… I think there was a question from…

Pratek Sibal:
I mean, I have no answer, but I think it's an important question. It's always tricky, particularly when we are dealing with authoritarian regimes, to put in place frameworks which may be used in whatever way possible. So I have no answer, but we should give some time to that.

Ian Barber:
Thank you. I just want to say that I think we began this session with a really crucial acknowledgement that there are truly glaring gaps in the existing discourse between human rights and AI governance, and that it's really key for all stakeholders to come in with global perspectives: from industry, from civil society, from governments, and from other champions on these issues. I think we've just started to shine a spotlight on these issues. We've also journeyed through what is really needed in terms of a human rights approach to AI governance. It's one piece of the pie, but a critical one. And I think it's key that we continue to firmly root all efforts on AI governance in the international human rights framework. So thank you so much to the speakers here in person and those online. Thank you for your patience, and apologies for going over and for not being able to field all the questions. But I would encourage you to continue to come up and speak to the speakers yourself. Thank you.

Audience

Speech speed

168 words per minute

Speech length

450 words

Speech time

160 secs

Ian Barber

Speech speed

203 words per minute

Speech length

3949 words

Speech time

1168 secs

Marlena Wisniak

Speech speed

169 words per minute

Speech length

1895 words

Speech time

671 secs

Oluseyi Oyebisi

Speech speed

156 words per minute

Speech length

1058 words

Speech time

407 secs

Pratek Sibal

Speech speed

168 words per minute

Speech length

2632 words

Speech time

941 secs

Shahla Naimi

Speech speed

197 words per minute

Speech length

1782 words

Speech time

542 secs

Speaker

Speech speed

171 words per minute

Speech length

680 words

Speech time

239 secs

Tara Denham

Speech speed

195 words per minute

Speech length

2361 words

Speech time

728 secs

The road not taken: what is the future of metaverse? | IGF 2023 Networking Session #65


Full session report

Audience

The discussion revolved around various significant issues concerning the metaverse. One key point raised was the presence of structural disadvantages in the adoption of metaverse enabling technologies. It was pointed out that these technologies are primarily developed in countries with high rates of IT development, placing developing countries at a disadvantage. It was acknowledged that developing nations need to catch up to match the level of technological sovereignty and metaverse connectivity that Western countries have achieved.

The importance of regulation for the metaverse was heavily emphasised. Regulation was seen as crucial for ensuring the value proposition and continuous growth of the metaverse. It was noted that the development of digital platforms has been accelerated by the COVID-19 pandemic. However, concerns were raised regarding the need to address standardisation and interoperability issues, as well as regulatory challenges associated with generative AI. These challenges underscored the necessity of effective regulation to navigate and address the complexities of the metaverse.

The absence of regulation for current metaverse and IT companies was highlighted as a concerning issue. It was noted that these companies operate without specific jurisdiction, leading to a lack of understanding regarding their regulatory framework. Furthermore, it remains unclear whether metaverse companies should offer digital citizenship, further complicating the regulatory landscape. The need to establish clear regulations and frameworks for metaverse and IT companies was deemed essential to mitigate potential risks and ensure accountability.

Privacy and jurisdiction concerns were also brought to attention. It was argued that digital citizenship in the metaverse raises questions regarding privacy and jurisdiction that demand robust resolution. The implications of privacy, jurisdiction, and applicable law in the metaverse need to be properly addressed to foster a safe and secure environment for users.

On a positive note, it was mentioned that there is existing legislation that can be applied to the metaverse, depending on the specific use case. Examples of existing regulations include those governing personal data, digital identities, electronic signatures, and payment interoperability standards. It was also noted that the hosting of personal data, whether in the metaverse or not, is governed by certain regulations. This recognition of existing legislation provided a ray of hope in terms of navigating the regulatory landscape of the metaverse.

The discussion also delved into the concerns surrounding the conflation of religious beliefs and technological advancements. It was highlighted that this can potentially challenge the structure of human personality. The importance of distinguishing the real world from the virtual world and the potential dangers of blending religious dogmas with technology were emphasised.

Technical challenges were also addressed during the discussion. It was mentioned that one potential bottleneck limiting the growth of the metaverse is lag or delay in connections. This issue needs to be properly addressed to ensure smooth and seamless user experiences within the metaverse.

The topic of regulation for safety was explored, with an emphasis on the limitations of relying solely on regulation. It was argued that regulation is often influenced by lobbying and tends to be abstract, while violations are concrete and precise. This highlighted the need to find a balance between regulation and direct accountability to ensure a safe environment within the metaverse.

The importance of holding platforms accountable was also emphasised. It was noted that technology plays a crucial role in collecting evidence, studying algorithms, and monitoring platform behaviour to effectively hold platforms accountable. This recognition highlighted the significance of technological advancements in ensuring platform accountability.

There were also specific discussions related to user experience and feedback. It was underscored that user experience is crucial and that having an individual log can be beneficial for both users and providers. User feedback was seen as essential for improving the metaverse and enhancing the overall user experience. The value of user feedback and the potential for using individual logs for accountability purposes were highlighted.

Other noteworthy observations included concerns about data collection and utilisation in the crypto metaverse, as well as a preference for quicker onboarding processes that do not gather excessive user data. Additionally, the abundance of digital assets generated by generative AI was seen as a potential threat to their value: while the metaverse has been estimated to be worth $5 trillion by 2030, an oversupply of digital assets could decrease that value.

In conclusion, the discussion surrounding the metaverse touched on a wide range of issues. It brought attention to the need to address structural disadvantages in technology adoption, regulate the metaverse to ensure its value proposition and continuous growth, resolve privacy and jurisdiction concerns, and distinguish the real world from the virtual world. Existing legislation was acknowledged as a potential framework for regulation, while technical challenges and user feedback were highlighted as important factors in the metaverse’s development. The discussion also raised concerns about data collection, asset value, and the impact of blending religious beliefs with technological advancements. Overall, the in-depth exploration of these various issues shed light on the complexities and considerations surrounding the metaverse.

Vakhtang Kipshidze

The Russian Orthodox Church recognizes the existence of the metaverse but asserts that it is a man-made and imperfect world that imitates God’s perfect creation. Vakhtang Kipshidze, a representative of the Church, shares this view and emphasizes that the metaverse is a human creation seeking perfection.

Kipshidze expresses concern about the metaverse becoming entirely secular, excluding religious values. He advocates for integrating religious values into metaverses to counteract religious exclusion and ensure inclusivity. This promotes peace, justice, and strong institutions within virtual worlds.

Kipshidze also raises concerns about the relationship between privacy and freedom in the metaverse. He highlights the close tie between privacy and freedom, warning that violating privacy in virtual environments can lead to a loss of individual freedom. It is crucial to establish privacy protections to safeguard personal freedoms in the metaverse.

Moreover, Kipshidze discusses the challenge of translating human encounters to the virtual realm. He argues that values like love may not have the same impact in virtual interactions as in face-to-face experiences within families and religious communities. Careful thought and consideration are needed to nurture important values in the metaverse.

Furthermore, Kipshidze expresses worry about the potential negative consequences of excessive immersion in the virtual world of metaverses. He believes that obsession with the metaverse can harm individual freedom and overall well-being. Balance and moderation are essential when engaging with virtual platforms.

Additionally, Kipshidze cautions against mixing religious and technological issues, such as digital immortality. He believes that combining religious and non-religious elements in virtual spaces could endanger the structure of human personality. This raises questions about the impacts of merging religious and technological concepts within the metaverse.

Finally, Kipshidze emphasizes the significance of distinguishing between the real world and the virtual world. He sees the issue of immortality as a challenge in differentiating the two realms. Bringing religious dogmas into the realm of technology should be avoided. Critical thinking and discernment are necessary when navigating the virtual landscape.

In summary, Vakhtang Kipshidze’s perspectives shed light on various aspects of the metaverse. The Russian Orthodox Church recognizes the metaverse as a man-made and imperfect creation. Kipshidze’s concerns and recommendations revolve around integrating religious values, protecting privacy and freedom, nurturing important values, avoiding obsession with the virtual world, and maintaining a distinction between the real and virtual realms. These insights contribute to the ongoing discussion on the implications and impact of metaverses in society.

Alina

Regulating the metaverse, a virtual reality space where users interact with computer-generated environments and others, poses complex challenges due to jurisdictional uncertainty and the potential for companies falling under multiple jurisdictions. The metaverse operates globally, making it difficult to determine which laws and regulations should apply. This issue is further complicated by conflicting laws on technology, privacy, and security in different countries. Finding a consensus on metaverse regulation becomes a formidable task.

An important concern for regulation is the standardization process and interoperability. As the metaverse evolves, establishing common standards and protocols is crucial for seamless integration and communication between platforms and virtual worlds. This ensures consistent experiences for users across different environments. However, achieving standardization is complex and necessitates collaboration among stakeholders.

On a positive note, the metaverse holds the potential for digital immortality. Avatars in the metaverse can learn and mimic real-life individuals, allowing their existence to continue even after their physical demise. This raises philosophical questions about identity and ethical considerations regarding creating digital replicas of deceased individuals.

Additionally, the concept of a digital state and digital citizenship is emerging within the metaverse. Individuals can have a presence in multiple metaverses, similar to having dual or multiple citizenship in the physical world. This concept offers intriguing possibilities such as digital societies and rights and responsibilities for digital citizens. However, it also raises concerns about governance, accountability, and potential inequality or exclusion within virtual communities.

In conclusion, regulating the metaverse is complex due to challenges related to jurisdiction, standardization, and interoperability. The metaverse offers potential for digital immortality through avatar preservation and the emergence of digital states and citizenship. While these advancements present exciting opportunities, they also require careful consideration of ethical and societal implications. Policymakers, industry leaders, and society as a whole must collaborate to shape the metaverse’s future while maximizing its benefits and mitigating risks.

Daniil Mazurin

AI plays a crucial role in the development of metaverses, as demonstrated by the integration of OpenAI’s ChatGPT into our daily lives. With over 180 million monthly users, ChatGPT showcases the widespread adoption of AI technology. The current metaverses built by companies like Meta or in the blockchain space, such as Sandbox or Decentraland, are unlikely to achieve mass adoption. This highlights the challenges and limitations that need to be addressed for metaverses to become widely accessible and appealing to the general public. The ideal metaverse should combine real-life experiences, virtual worlds, augmented reality (AR), and AI technologies. Meta’s Rayban AR glasses exemplify a product that integrates the metaverse into society by blending the virtual world with our physical reality. Proper regulation is essential to govern innovative technologies like the metaverse. Lessons from the crypto industry emphasize the importance of regulating such industries to ensure compliance with legal and ethical boundaries. The development and expansion of the metaverse face challenges related to processors and software technologies like Unreal Engine and Unity Engine. Powerful processing capacities are required for advanced virtual worlds, and accessing such metaverses without appropriate devices can result in a subpar experience. Effective user onboarding and verification processes are crucial for enhancing user interaction and platform security. However, concerns regarding privacy and data misuse arise when considering user data management. Addressing these concerns is integral to maintaining user trust and safeguarding personal information. In an ideal metaverse, digital assets should have a limited supply. This scarcity contributes to the creation of demand and enhances the value and ownership experience within the metaverse. Additionally, generative AI can be used by artists to enhance their artwork, rather than replacing them entirely. 
Furthermore, AI can be utilized to create digital immortality, where AI systems simulate deceased loved ones. This technology allows individuals to continue communicating with their loved ones even after their passing. However, acceptance and implementation may depend on religious and moral considerations. In summary, AI plays a significant role in metaverse development, manifesting in the integration of ChatGPT into our daily lives. However, current metaverses face challenges in achieving mass adoption. The ideal metaverse merges real-life experiences, virtual worlds, AR, and AI technologies. Proper regulation is necessary to balance innovation and mitigate risks. Advancements in processors and software technologies are essential for metaverse expansion. User onboarding and verification are critical for user interaction and platform security, but privacy concerns must be addressed. Scarcity of digital assets and the use of AI for digital immortality can enhance the metaverse experience.

Moderator

The analysis provides insights into various arguments and perspectives surrounding metaverse technology. One argument emphasises the importance of considering values and preserving freedom in the metaverse. It highlights that religious communities should be included in discussions about metaverse technology, as sometimes the metaverse can undermine religious values. The analysis suggests that the preservation of privacy in the metaverse can ensure the protection of freedom. However, it also cautions that an excessive obsession with the metaverse can have detrimental effects on freedom.

Another viewpoint discusses the opportunities and threats posed by metaverse technology. It acknowledges the potential for the metaverse to be utilised for educational and healthcare purposes, which can contribute to SDG 4 (Quality Education) and SDG 9 (Industry, Innovation, and Infrastructure). However, the analysis also recognises the potential for crimes and abuse in the metaverse, raising concerns about safety and ethics. It references a report from the Center for Global IT Cooperation, which provides analytical insights into the metaverse’s impact.

Additionally, the analysis raises concerns about the potential structural disadvantages of metaverse technologies for developing countries. It points out that most metaverse technologies are developed in high IT development countries, primarily in Western Europe, leaving developing countries at a disadvantage due to technological limitations. This observation aligns with SDG 10 (Reduced Inequalities) and SDG 9 (Industry, Innovation, and Infrastructure), advocating for more inclusive development and support for developing countries in adopting metaverse technologies.

Furthermore, the analysis advocates for the active involvement and regulation of metaverse technologies by the governments of developing countries. It argues that developing countries should prioritize the regulation of innovation to effectively navigate the challenges and opportunities presented by the metaverse. This viewpoint aligns primarily with SDG 9 (Industry, Innovation, and Infrastructure) and emphasizes the importance of government intervention for equitable development.

Lastly, the analysis stresses the necessity for audience engagement and idea sharing. It highlights the value of encouraging the audience to actively participate by raising their hand, sharing ideas, or asking questions. This perspective aligns with SDG 17 (Partnerships for the Goals), emphasizing the importance of collaboration and partnership to fully realize the benefits of metaverse technology.

In conclusion, the analysis of metaverse technology presents a diverse range of arguments and perspectives. It underscores the need to consider values and preserve freedom in the metaverse, highlights the opportunities and threats posed by metaverse technology, raises concerns about the potential structural disadvantages faced by developing countries, advocates for government involvement and regulation, and stresses the importance of audience engagement and idea sharing. Overall, this analysis offers valuable insights into the complex nature of metaverse technology and its implications for various stakeholders.

Session transcript

Moderator:
Good morning, dear colleagues. I'm glad to see everyone here today. We'll have a discussion, a networking session, on the topic of the future of the metaverses. I would really recommend and urge everyone to sit closer to the presidium, as I think this format is better realized as a general exchange of ideas rather than speakers reading prepared reports. But keeping this in mind, we will still have several speakers with prepared reports on the topics of the development of the metaverses, the future of metaverses, the ethical reasons behind the development of such technologies, and general views. Some of our speakers are representatives of civil society, others of academia, and we also have several people who are involved in NFT and metaverse development projects. So hopefully this session will be interesting and involving, and I really urge participation from everyone. Our first participant in the discussion is a member of, you'll be surprised, the Russian Orthodox Church, Vakhtang Kipshidze. I think that Vakhtang has joined us online. Vakhtang, can you hear us? I can see that Vakhtang is online, but maybe he has some technical issues, and we should start with another speaker, Daniil Mazurin, who is also online. Almost all of our speakers are currently online; this should say something about the development of metaverses online already. Here I see. Daniil, are you with us? Okay. Okay, I can see Vakhtang has joined us. It's late night in Moscow, but still, thank you very much for joining the IGF.

Vakhtang Kipshidze:
Good morning, dear colleagues. Thank you so much for inviting me to this forum. First of all, I would like to start by saying that it is quite natural for the Russian Orthodox Church to take part in such discussions about metaverses, because technologies nowadays are so developed that religious communities cannot just stay aside from these discussions. And particularly, this is true of the metaverse. What do we consider the metaverse to be? The metaverse, as I think, is a man-made world that is controlled by man. However, the problem with this world is that it claims to be perfect. We religious people are used to living in an imperfect world, and religions, and Christianity at least, try to find the recipe to overcome sin and the very fact that this world is imperfect. However, the metaverse is a parallel world, and this parallel world sometimes tends to put religions aside, saying that this world is secular, and that the values which are actual in this world have nothing to do with the religious values which are widespread in our contemporary, not virtual but real, world. First of all, I would like to say that the imperfectness of the real world around us, which everybody can test on his own skin, actually carries over into the metaverse, into the virtual world which is being established by us people. Just to the opposite, the real world, as we believe as religious people, is created by God. So I would say the way we combine these two worlds in our mind is very crucial and important for us. My main idea is about values: how can we support values in the real world and try to bring them to the virtual world, to the metaverse? It is not a simple task. I would like to stress that our church actually tries to engage all technologies, and the virtual world in particular, for Christian testimony. However, it is very difficult to get through to the hearts of people.
And of course, the metaverse is a very material world. And as you know, even better than me, this world is actually directed by material values and material income. So it is very difficult for religious communities to testify about values. Here during your session, as I read, you are going to discuss not only, I would say, the advantages but also the disadvantages of the metaverses, and particularly you will discuss crimes that are being committed there. I would like to say that these crimes, if we judge by their consequences, are very severe, because people sometimes can be actually deprived of their privacy. Our church throughout its history has testified that privacy is very connected with freedom: if you are deprived of privacy, sooner or later you will be deprived of freedom. And freedom is a real value of humanity that should be saved and protected everywhere. This is one thing. The other thing for you to discuss is that our real humanity, I would say, the humanity which got used to living in the real world, not the virtual one, throughout its history found a way to produce values and produce love. The most important value is love. And love is not a simple value to create and to establish. Love always grows in the context of family, in the context of relatives, in the context of religious sacraments if you are a religious person. All that is very, I would say, questionable in the world of the metaverse. So I think that here, at this stage of development of the human race, we should think about values and how these values would be protected. And it should be our good will to go on the path of protecting these values. The other thing I would like to stress is that sometimes, and we all see that, people become obsessed by the virtual world, by metaverses. And this obsession, I would say, is very detrimental to the freedom and well-being of the human personality. Again, we, humanity, are well acquainted with obsessions of different kinds.
And obsession with virtuality is a new kind of obsession. So if we want to somehow find a way to fight this obsession, we should elaborate new approaches, and this is not a simple task. With that said, I would like to thank all the organizers of this forum and wish you good luck in your discussions. If it is possible, I am open to questions that could come from your side. So, thank you very much indeed.

Moderator:
Thank you very much, Vakhtang. Thank you also for finding the time to join us. We would really encourage our participants to ask questions and to engage, and hopefully you'll stay with us during the whole discussion. I must also tell you a little about the organization I represent, which hosts today's networking session, called the Center for Global IT Cooperation. It's a think tank that deals with questions of digital development, transformation, the digital economy, internet governance, and all sorts of things digital. Recently, we contributed an analytical report on the theme of metaverses to the T20, within the format of the G20. It was also dedicated to the ethical issues which arise during the development and usage of metaverses, the possibilities of crimes and abuse, and also the opportunities which metaverses can provide in terms of education, in terms of healthcare, and all sorts of things which come with them. And I think that the best way to elaborate on the positive side of metaverses would be to give the floor to someone who deals with them directly and works with projects connected to metaverses, NFT technologies, and metaverse-enabling technologies. Today we have with us our dear friend Daniil Mazurin, a young entrepreneur, businessman, and startup guru. Daniil, is everything all right? Do you have, yeah, you're supposed to speak. Great.

Daniil Mazurin:
Awesome. Thank you so much, Alim. Long time no see. It's a pleasure to be here, and I'm always grateful for the opportunities that you give us. Coming from the private sector and from a technologist's point of view, I'd like to start with one thesis: that we are living through one of the most, if not the most, interesting periods of human history in terms of the integration of technology into our society and our daily lives. And I'm specifically talking about artificial intelligence, which we have to talk about today, because metaverse tech and AI are extremely connected. I don't know about the technologies we had in ancient Egypt, forgotten technology, but in modern society I believe that AI plays a very big role. And we are already seeing a lot of users of ChatGPT, right? The AI model developed by OpenAI has more than 180 million users monthly; that was the statistic in August. And metaverse technology, as I said, is very connected with artificial intelligence, because we cannot develop a proper metaverse, virtual world, or augmented reality world without AI integration. So I like to state that we're living in a very, very interesting period of human history. We've already tested on ourselves how ChatGPT influences our lives, and I believe the same thing will happen with metaverse technology. Right now the market is not very bright, of course, because, as stated in the description of the agenda, a lot of corporations are stopping development of metaverse tech. Why? Well, I don't know about the directors of those corporations, but I can see a lot of startups, especially in third-world countries, developing metaverse projects, and they're pretty successful.
And they're being bought by many businesses and corporations, and their APIs are being used, for example, in the startup industry. So we're seeing a lot of things going on, but we don't see real integration. Why? Well, personally, I think that modern metaverses are not what a metaverse should look like. A metaverse should combine not only virtual worlds, with the VR headsets and VR glasses you have right now from Meta and other corporations; a metaverse should include real life too. And we can combine real life with the virtual world using AR technology. You've probably heard and seen the recent news about the Ray-Ban and Meta AR glasses. This is one of the biggest AR and metaverse products for integration into society. It's a brand-new Ray-Ban, it's very appealing to youth, and it's very cool. So I believe that by making such mass-adoption products we'll be able to integrate this tech into our society. And yes, regulation is a must, right? It is needed. We've seen what happened in the crypto industry over the past two years: a lot of scams, and people lost a lot of money. So such innovative industries should be regulated, of course. I'm not talking about US-style regulation, where you have to ban a lot of companies, and I'm not talking about Chinese-style regulation, where you just ban every technology developed on your own. I'm talking about good regulation, where you give businesses an opportunity to thrive and give startups an opportunity to properly make money and improve the technology. And this tech, metaverse technology, VR, AR, AI, should be regulated first of all in third-world countries, where this innovative tech gives an opportunity to increase GDP, to increase the quality of people's lives, and overall to make very cool implementations and build a future in these countries. So yeah, not to make a long speech.
Overall, I'd like to say that the metaverse we'll see in the upcoming years is not the metaverse that we have now, like what Mark Zuckerberg is building, or what we have in the blockchain space, like The Sandbox or Decentraland. These are not metaverses that will be mass-adopted. The metaverse will be a combination of VR, AR, and AI technologies. And specifically, if we're talking about AI, it's already being used, for example, for integrating AI into NPCs in gaming and virtual worlds, or even in augmented reality, in terms of GPS mapping and automatically creating immersive experiences with artificial intelligence for AR glasses or AR applications on our smartphones. So yeah, I think this is it for me.

Moderator:
Oh, so Daniil, I have a brief question for you. What do you think about the following thesis? Taking into account all the positive sides that metaverse-enabling technologies can provide, for instance in corporate education formats, or even in spheres such as autopiloting, could there potentially be some structural disadvantages? You have talked briefly about developing countries, and I can clearly see the problem: metaverse-enabling technologies are amazing, very inspiring and great, but we should acknowledge that they are developed only in countries with a high rate of IT development, GDP, and so on, mostly Western European countries. Could there potentially be a situation of structural disadvantage, where the developed world already has access to such technologies and uses them, while the developing world once again has to try to reach that level of technological sovereignty and metaverse connectivity and is unable to do so simply because of the structural differences? What should be done about it? Should this question be addressed as well?

Daniil Mazurin:
Yeah, absolutely. I think that's a great question, and a great statement from you, because there is an absolute structural disadvantage nowadays in terms of technology creation, from the West and from China. And that's why I said that the first countries that should properly regulate and give startups opportunities to thrive and build products should be third-world countries. Yes, of course, there is a big advantage for the United States and Europe in terms of technology and technological resources. But a lot of things are changing nowadays, and that's why third-world governments should regulate innovation first, while the other countries are still trying to regulate and have other interests at heart. So yeah.

Moderator:
Thank you very much.
So maybe there are some ideas from the audience? I also see that we have around 20 people online; I would really encourage anyone to raise their hand and ask a question, or maybe propose an idea of their own. Yes, I can clearly see a gentleman over there. Do we have a mic? Yes, there's a microphone.

Audience:
Thank you. Just a few things. COVID acted as a catalyst for so many different digital platforms to come up, and it showed us some of the value proposition of the metaverse. But as you know, the standardization process is still going on, and the interoperability issues are there. There have been certain projects, for example digital immortality: if you have a digital avatar, it can learn about a certain real-life person, and if that person is no longer there, the avatar lives on, and the question is how accurately it can mimic that real person. So there are certain advantages of using the metaverse. My question is this: now we have generative AI, there are talks about regulation, and questions about how AI content is going to be treated, for example whether it would be acceptable in certain areas or not. There are so many different platforms in which you can use ChatGPT or Midjourney to create many different types of content in the metaverse. My question would be: how important will the role of regulation be in ensuring that the value proposition of the metaverse is such that it continues to grow and offers a lot of opportunities for people in different countries? Thank you.

Alina:
Yes, can I take this one? I will answer for Daniil. Actually, you raise a very good point about regulation. A big reason why metaverses are not regulated is that we do not understand in which jurisdiction they actually operate. IT companies made the metaverse, so where does it actually exist? Some people think of the metaverse as the first step toward a digital state. So can they offer digital citizenship to a person who is in the metaverse? And if a person is in many metaverses, it's as if they have double citizenship, or citizenship of three countries. So the question is: do we need to regulate the metaverses or the IT companies, and maybe create some kind of framework for the whole metaverse concept and DLT technologies? Because we still don't have regulation even on the financial side of things like DLT and cryptocurrencies; they still exist somewhere on the internet without a particular jurisdiction, without a country, without anything. We have not actually decided whether IT companies operating social networks face any regulation apart from that of the country they are registered in. This is, of course, a very difficult question, and maybe we're just at the first step of this. And apparently the metaverse can give you digital immortality, because it's kind of a digital prison for people who are no longer with us. So you're right, it's important, but I don't think there is an answer to the question yet.

Audience:
Thank you. Just a couple of other things I would like to mention: since you mentioned digital citizenship, that also raises privacy-related issues and jurisdictional issues, such as which law is going to be applied to whom. That is also a big problem that needs to be resolved. So thank you.

Thank you. My name is [inaudible]. Just to contribute to that: I think there is actually a partial answer to some of these questions. It depends on the use case in the metaverse. If it's anything related to personal data, there is actually regulation that already applies around digital identities, electronic signatures, payments, and interoperability standards, but that's a public sector use case. Also with the hosting of data: as soon as it's personal data, there is regulation that governs it, whether it's the cloud or the metaverse; that doesn't matter. So there are pieces of legislation that are already applicable to the metaverse, depending on the use case. Yes, there are some Wild West elements around NFTs, gaming, et cetera. But if you look at it from a US context, a Chinese context, or a European Union context, there is actually legislation in place that governs key elements of the metaverse, whether you use it or not. So just a little contribution.

Moderator:
Thank you for the brief contribution. Let me first give the word to Daniil Mazurin, and then we'll give a brief word to Vakhtang.

Daniil Mazurin:
So yeah, I would just like to add to Alina on that question. I somewhat disagree that there is any problem with regulation at all, because most of the companies and startups building metaverses are incorporated in some country; even crypto companies are incorporated, usually in Hong Kong right now, or in the Seychelles. So reaching these companies with regulation is not the problem. The real question, I think, is how we should regulate them. Do we need to give them full freedom of action? Or do we need to really look after them and see how things go in crypto, AI, or the metaverse, because they can influence Gen Z and, you know, destroy the world, et cetera. And the question that was asked is: how should we properly regulate AI in the metaverse? Well, I think AI is already being regulated, and all the companies building AI already apply their own self-regulation. If we're talking about OpenAI, the biggest company right now, they essentially regulate their own code: you cannot, for example, generate 18+ content using their generative AI, and you cannot ask certain questions, or get answers to questions related to some specific topics. But that could go wrong, right? They could essentially remove this self-regulation of the AI. And that's the real problem. That's why governments should properly regulate them, because AI is dangerous. We have to realize that if it goes beyond the OpenAI servers, or something else, it could turn into a big issue not only for the company, but for humanity in general.

Moderator:
Thank you, Daniil. Let's give a brief word to Vakhtang as well; he raised his hand.

Vakhtang Kipshidze:
Thank you so much. It is very remarkable that there are people here who actually raised the question of digital immortality. Being a representative of a religious organization, I would like to say that we should be very careful in dividing religious issues and technological issues. If, at some stage of the development of technologies, religious issues such as immortality and non-religious issues like technological progress are being mixed, I think it poses a big danger for humanity. Because if people believe in immortality, that is a good thing; but if they think they can get this immortality now, just through some technological procedures, I think it is a very big challenge, because we cannot simply bring the space of dogmas into the space of technology. In that case, I would say the whole structure of the human personality could be endangered, because at some stage a person will not understand whether he or she has a body or does not have one. It is, I think, a very crucial issue to see the difference between the real world and the virtual world. And sometimes, as the issue of immortality shows well, this mixing is very noticeable. Thank you so much.

Audience:
Hi, I don't know much about the metaverse. I was wondering, what is the bottleneck for the spread, the growth, of the metaverse right now? Is it technical? And if it is technical, would lag or delay in connections be one of the big challenges or not?

Daniil Mazurin:
Yeah, that's a great question, and I actually wanted to say one very important thing in answering it. The real bottleneck in metaverse creation and expansion is essentially processors, because you cannot really download a massive online world, live in it, and communicate with other avatars, other people, in this downloaded online world, right? This is the main technical issue. For example, if you open a metaverse such as Decentraland right now, and you don't have, say, an MSI gaming computer, your computer will find it very hard to process things and it will be very slow. This is one of the issues. And if we're talking about VR, these VR glasses are also very slow. It's also very connected to the development of Unreal Engine and Unity, so a lot of things in the metaverse depend on this infrastructure. Right now you can see a whole new upgrade from Unreal Engine in how you see things; using Unreal Engine, you will be able to literally see each and every detail that was animated, right? So yeah, there are bottlenecks, but sooner or later we'll see developments from these engines and from computer processors. Sooner or later it will happen.

Audience:
Good morning. My name is Claudio Agosti and I'm a platform auditor. Although I welcome the existence of regulation, I also believe that it cannot be seen as the solution that will guarantee us safety, because regulation is the output of lobbying, and because regulation needs to be abstract while the violation is more concrete and precise. In past years, we saw that the only way to investigate platform misbehavior was to have researchers developing their own technology to collect evidence, study the algorithm, study the platform, and then hold them accountable to a data protection authority, through media reporting, or through government reporting. So the question, which I believe is more for Daniil, is: would you allow, for example, in your tool, every user who is having an experience to save a log of what is happening? And would you accept that this log would be used to actually hold you accountable, or at least to raise the question of why a system behaved a certain way? Because in the end, the whole experience a person gets is individual; it depends on an algorithm that will not repeat its own behavior in the future, and on other contextual elements that will never be repeated. So only having evidence of what has happened, a log, a video, can allow a person who suffered something to ask for an explanation or for attribution. Thank you.

Daniil Mazurin:
Could I ask a clarifying question? Is this question about whether I would allow my platform or tool to be audited or regulated?

Audience:
That is normally defined by regulation: for example, whether your platform needs to run in a sandbox, or whether you need to document it as high risk or low risk; that is unavoidable. What I was asking is something more. Normally, regulation can let you certify your tool, but the problem is never in the tool itself; it is in the experience of your users. Are they the ones offering this information, or suffering harassment, et cetera? And if there is a log, an individual log of your experience, that can at least allow the user to ask a further question, or to offer you feedback to improve the tool. Yeah, absolutely.

Daniil Mazurin:
That's a great question, and I've personally communicated with a lot of platform auditors and smart contract auditors in the space. You know, that's a question of UI/UX, right? It's always better to skip the KYC during the onboarding process for your tool; it's always better to skip, say, authorization of the user, because it's long and not very useful for the user. The user wants to get in touch with your product as soon as possible; users don't want to register and go through that whole process. But nowadays it's essential, even in the crypto space, to know who your user is, or what wallet the user has. You essentially collect the information; the real issue is how you use it. You can get rid of fraud on your platform when you know your users, or you can use user data to manipulate users and sell this data. It's the issue of how you use users' data, rather than whether you need to collect data. Yeah.

Audience:
We have one more question. Sorry, if it's OK, I actually have two questions: one for Daniil, and I also want a religious perspective, since we already discussed digital immortality. So I'll start with digital immortality. What happens right now is that we have a lot of content on the internet, and everybody who is online leaves a digital footprint; even after they're dead, the content remains on different platforms. Take the example of YouTube: there are so many lectures available from so many people, there are documentaries, and you can see so much content about people. The only difference I see with the metaverse is that with digital immortality, that kind of content becomes interactive. If there is a digital avatar of somebody, you can interact with that digital avatar; the content we have right now is not interactive. If there is a video on YouTube, you can't really interact with that person. So from the religious point of view, and let's also consider AI regulation (AI should not be discriminatory based on religion, race, and all those factors), when the regulation is there, and when you talk about digital immortality and somebody's digital avatar living on, I just want to know: why should it be considered a bad thing? It can have so many advantages for the people related to somebody whose digital avatar lives on. The second question is for Daniil, regarding the value that the metaverse has to offer. Let's talk about, for example, digital assets. There was an article from McKinsey which estimated that by 2030 the metaverse is going to be worth $5 trillion, and there were many reasons. One reason was scarcity in the real world: you have limited resources in the real world, while in the metaverse there is virtually no limit on digital assets. So we can compare: in the real world we have scarcity, but in the metaverse there is going to be abundance of everything.
Now with generative AI, you can generate digital assets; there are so many tools for generating them, and a lot of people are doing that. So won't that reduce the overall value of digital assets? Because you have scarcity in the real world but abundance in the metaverse, and in economic terms abundance, in some cases, is not good: it reduces the value of assets. So these are my two questions. Thank you.

Daniil Mazurin:
Yeah, I can also add a little about the immortality thing, immortal avatars, your immortal digital persona, but let's start with the second question. First of all, you have to realize that in the ideal type of metaverse you will essentially own your items and assets. So there will not be an unlimited supply of items and assets that can be produced. Of course, if we're talking about generative AI right now, the utility and the price of 3D-rendered artworks have recently declined, because right now you don't need to hire a 3D artist; you can just go to a generative artificial intelligence and make your own art. You can consider this as something that creates unlimited supply, but you can also consider it a tool, because right now a lot of 3D artists, for example, use generative AI to generate pictures, then add their own art on top, and it becomes even brighter and more beautiful. So, back to the metaverse: there still will not be an unlimited supply of assets and items, because of supply and demand. If we're talking about a blockchain-based metaverse, you have to sell NFTs: NFT land, your clothes, your avatar. So there will always be a limited supply in order to create demand. That's it, in short. As for the immortality issue, I strongly support this question and I believe there is a future in that. I truly believe that creating an AI for a relative who has, unfortunately, died can be a good thing, but we cannot go too far with that, because from a religious point of view I don't think that's a moral thing to do, right? So there will always be such issues. But for people who are willing to do this, who don't have any religious reservations, who are not religious, or whose religion allows them to do so, then why not, right?
Because you would always be able to communicate with a person who is very important to you, right? So yeah, thank you so much for your questions. Very, very interesting questions.

Moderator:
So thank you very much, Daniil, Vakhtang, and our dear colleagues. I think our time is running out, as our colleagues have already indicated. I thank everyone for their involvement in the discussion, and if you have any questions after the session, we'll be glad to talk in private. Also, a small notice: tomorrow our organization is hosting a soiree, and we would love to invite all of you to take part. We'll give more precise information after the session. Thank you very much, all of you.

Alina

Speech speed

177 words per minute

Speech length

284 words

Speech time

96 secs

Audience

Speech speed

164 words per minute

Speech length

1391 words

Speech time

508 secs

Daniil Mazurin

Speech speed

138 words per minute

Speech length

2194 words

Speech time

951 secs

Moderator

Speech speed

162 words per minute

Speech length

1125 words

Speech time

416 secs

Vakhtang Kipshidze

Speech speed

125 words per minute

Speech length

1127 words

Speech time

542 secs

The State of Global Internet Freedom, Thirteen Years On | IGF 2023 Launch / Award Event #46

Full session report

Emilie Pradichit

Southeast Asia is currently facing significant challenges due to the presence of authoritarian regimes that employ cyber laws to target individuals who express dissenting views or defend human rights. These regimes often exploit the concept of national security as a pretext for suppressing freedom of speech and violating human rights. For instance, in several ASEAN countries, such as Thailand, Cambodia, Vietnam, and Myanmar, there are concerns about the lack of freedom, as highlighted by the Freedom of the Net report. In Thailand, the situation is particularly severe, with a human rights lawyer, Arnon, facing the possibility of 210 years in jail for advocating for reforms within the monarchy.

Another concerning development in Southeast Asia is the misuse of artificial intelligence (AI) for surveillance and content moderation by governments in the region. These practices have resulted in privacy violations and infringements on individual freedoms. Governments are increasingly regulating tech companies to ensure the enforcement of their laws. Notably, the Thai government has passed a decree obligating tech companies to remove content deemed a threat to national security within 24 hours. Additionally, AI has been misused for facial recognition surveillance, raising concerns about privacy and potential abuse of power.

Emilie Pradichit advocates for rights-respecting regulatory frameworks and holds tech giants accountable for the misuse of their platforms. She calls for the implementation of the United Nations Guiding Principles on Business and Human Rights (UNGPs) and the Organisation for Economic Co-operation and Development (OECD) guidelines for multinational tech companies. Pradichit suggests that tech giants should be held criminally and civilly liable for any harm caused by their platforms. She points to the Rohingya crisis and the use of platforms like Facebook to propagate hate speech against the Rohingya people to illustrate the urgency of her arguments.

The Freedom Online Coalition (FOC), which is primarily known among digital rights and online freedom groups based in Washington DC, lacks visibility and accessibility, especially among people from non-Western countries. To amplify its impact, FOC must work towards increasing awareness and engagement beyond its traditional base. This would involve conducting stakeholder engagements not only in Washington DC but also in other regions. Unfortunately, visa restrictions often hinder engagement with the global majority, making it difficult for individuals from these regions to travel to Europe or the United States.

Furthermore, FOC’s role becomes particularly crucial in light of the many elections scheduled worldwide for 2024. Civil society groups anticipate FOC to release statements targeting authoritarian governments and the private sector to safeguard democratic processes and protect human rights.

To effectively combat authoritarian governments online, FOC should invest in civil society and provide financial support to organizations fighting against digital dictatorship. Financial constraints often limit the abilities of these groups to engage in advocacy and carry out essential work.

Aside from these specific challenges, there are concerns about the local Data Protection Act in Thailand. While the government claims to have developed the Act by taking inspiration from the General Data Protection Regulation (GDPR) in the European Union, there are issues regarding effective oversight and remedy. The Act includes government-led exemptions that allow violations of data under the guise of national security.

Another aspect that deserves attention is the lack of dialogue and understanding of the local context in global exchanges. It is crucial for international diplomats and institutions to have a comprehensive understanding of the practices followed in each country to foster more effective collaborations and mutual understanding.

The overarching theme throughout these discussions is the importance of respecting and implementing international human rights law. Emilie Pradichit insists that civil society does not oppose international human rights law but rather desires governments to adhere to these principles. Concerns are raised about the ease with which governments deceive international institutions by creating an appearance of compliance with international standards.

In conclusion, Southeast Asia faces numerous challenges related to authoritarianism, cyber laws, and the misuse of AI. To address these issues, there is a need for greater awareness and engagement with organizations like the Freedom Online Coalition. Additionally, it is crucial to hold tech giants accountable, invest in civil society, strengthen data protection laws, foster meaningful dialogue, and promote the implementation of international human rights standards. These efforts are essential for safeguarding human rights, protecting privacy, and upholding democratic processes in the region.

Audience

During the discussion, one of the main points highlighted was the confusion surrounding the support mechanisms for online activists who are under threat. The speaker mentioned their ability to provide support for these activists, but there seems to be a lack of clarity on the specific services offered in different jurisdictions. To address this, an audience member sought clarification on the support services available in various legal contexts.

Allie Funk, who leads a team of seven people, stressed the importance of collective work and making tough decisions. This indicates that her team understands the challenges and complexities involved in supporting online activists who face threats. It shows their dedication to their work and the need for collaboration in achieving their goals. The audience showed gratitude towards Allie Funk for her closing remarks, indicating that her insights and perspective were valued.

One noteworthy observation from the discussion is the mention of SDG 16, which focuses on peace, justice, and strong institutions. This indicates the connection between the support for online activists and the broader goals of promoting justice and ensuring the protection of human rights. The speaker’s ability to provide support aligns with the goals of SDG 16.

Overall, the discussion shed light on the confusion surrounding support mechanisms for threatened online activists. It emphasized the importance of collaborative efforts, tough decision-making, and acknowledging the hard work of those involved in supporting these activists. The audience’s gratitude towards Allie Funk indicates the impact of her closing remarks and the appreciation for her insights. Moving forward, it is crucial to address the confusion surrounding support services and ensure a clear understanding of the resources available for online activists in different jurisdictions.

Guuz van Zwoll

The European Union (EU) has developed regulatory laws, including the Digital Services Act (DSA), the Artificial Intelligence (AI) Act, and the Digital Markets Act (DMA), through extensive multi-stakeholder engagement. Some companies have also rolled out the General Data Protection Regulation (GDPR) across all the countries in which they operate. These laws have received positive sentiment for maintaining a balance between strong regulation and the protection of human rights; they include transparency clauses and an appeal process for removed comments.

The Netherlands is committed to promoting the principles of the DSA, AI Act, and DMA. They have released an English translation of the Dutch International Cyber Strategy, urging other countries to adopt these EU regulations and implement associated human rights and democratic clauses. The Netherlands focuses on inclusive internet governance, integrating cyber diplomacy, digital development, and human rights work.

In addition, the Netherlands incorporates the multi-stakeholder model into internet governance, emphasizing digital security, governance principles, and digitalization in all their initiatives. They prioritize civil society engagement, running programs like the ‘Safety for Voices’ program to include diverse perspectives in governance decisions.

The Netherlands also supports human rights defenders and digital defenders at risk through initiatives like the Digital Defenders Partnership. They provide support in legal aid, physical protection, digital security, and psychological well-being. Transparency is a key component of the Netherlands’ global governance approach, advocating for the inclusion of global majority countries and multi-stakeholder involvement to protect human rights.

In summary, the EU’s regulatory laws, such as the DSA, AI Act, and DMA, strike a balance between strong regulation and protection of human rights. The Netherlands actively promotes these laws, advocating for their adoption and the implementation of associated human rights and democratic clauses. It prioritises inclusive internet governance, integrating cyber diplomacy, digital development, and human rights work. The Netherlands also supports civil society engagement and human rights defenders, and emphasises transparency in global governance to protect human rights.

Olga Kyryliuk

Over the past decade, the field of internet freedom has witnessed significant changes and developments. Previously, topics like cybersecurity were widely perceived as unimportant and lacked understanding. However, there has been a notable shift in recent years, with cybersecurity garnering more attention and recognition. This growing awareness can be attributed to increased public understanding and recognition of the importance of internet freedom and digital rights.

The advancement of technology, particularly in artificial intelligence (AI) and blockchain, has brought both new opportunities and new challenges. While these technologies open up possibilities for innovation, they also pose risks to safety and security, reinforcing the need for robust regulation and safety measures.

In addition, a troubling trend of digital authoritarianism has emerged, characterized by internet shutdowns, content censorship, and the unregulated use of surveillance technology. Instances of internet shutdowns have increased globally, leading to a limitation of free expression and access to information. Moreover, the lack of effective regulation of private tech companies and tech giants has further exacerbated these issues. The use of mass biometric surveillance systems without proper legal safeguards is also on the rise, posing a threat to privacy and civil liberties.

To address these challenges, it is crucial to foster continued collaboration and dialogue. Concrete initiatives and partnerships, rather than just talk, are needed to tackle the growing threats to internet freedom. By engaging stakeholders from various sectors, progress can be made in tackling the complex issues surrounding internet freedom and digital rights.

Furthermore, the engagement of civil society in initiatives such as the Freedom Online Coalition (FOC) is of utmost importance. The involvement of civil society can provide valuable insights and perspectives in shaping policies and decision-making processes. Olga Kyryliuk, who leads an influential internet freedom project, stresses the need for better civil society engagement within the FOC. This can be achieved through periodic consultations on specific thematic issues, allowing for an open exchange of ideas and feedback.

The importance of regional and national communities cannot be overlooked in promoting internet freedom. The FOC should prioritize working with these communities and foster connections and partnerships between them. By bridging the gap between governmental representatives and regional communities, the FOC can play a pragmatic role in facilitating dialogue and collaboration.

However, the current state of the global digital compact and the Freedom Online Coalition calls for improvement. Civil society feels frustrated due to a lack of clarity and engagement opportunities. This restricts the meaningful participation of implementing partners in shaping policies and decision-making processes. It is crucial to establish clear venues and mechanisms that allow for effective engagement and collaboration.

Finally, it is important to exercise caution when adopting regulations from other regions, such as the European Union’s General Data Protection Regulation (GDPR). While these regulations may be seen as ideal, they should not be adopted without proper understanding and adaptation. Countries that directly implement GDPR as their national law have faced challenges during the enforcement phase. Therefore, dialogue and conversation with national legislators, as well as capacity building, are essential for the successful adoption and implementation of such regulations.

In conclusion, the past decade has witnessed significant changes in the field of internet freedom. While there has been progress in raising awareness and understanding, challenges remain in ensuring the safety and security of the digital space. Collaboration, engagement of civil society, and the development of concrete initiatives are crucial in addressing these challenges and protecting internet freedom and digital rights.

Oliver

Oliver expresses concern over the lack of transparency displayed by the Freedom Online Coalition (FOC) in their dealings with UNESCO guidelines. He argues that the FOC needs to be more open and transparent about their actions, implying that they may not be acting in the best interests of promoting freedom of expression and human rights in the digital space.

Furthermore, Oliver raises an additional concern about UNESCO’s guidelines, specifically focusing on the potential promotion of authoritarianism in the digital sphere. This highlights his worry that these guidelines may inadvertently facilitate the rise of oppressive regimes online. Both Oliver and the speaker share a negative sentiment towards these issues.

However, the summary lacks supporting evidence or specific examples to substantiate these concerns. Without further supporting facts or arguments, it is difficult to fully understand the basis for these apprehensions. Including additional evidence or examples would strengthen the arguments made by both Oliver and the speaker.

In conclusion, Oliver calls for increased transparency from the FOC regarding their dealings with UNESCO guidelines. He suggests that the FOC’s actions should be more transparent and urges them to openly share information. Additionally, Oliver expresses worry about UNESCO’s guidelines potentially promoting authoritarianism in the digital space. These concerns highlight the need for careful consideration and vigilance in protecting freedom of expression and human rights online.

Allie Funk

Internet freedom has been experiencing a steady decline for the past 13 years, marking 2023 as another year of regression. According to the assessment conducted by Freedom House, attacks on free expression have become increasingly common, with individuals being arrested for expressing their views in 55 of the 70 countries under review. Furthermore, governments in 41 countries are actively blocking websites that host political, social, and religious speech. These developments have contributed to a negative sentiment surrounding the state of internet freedom.

The crisis has been further exacerbated by advancements in artificial intelligence (AI). The rise of AI has led to intrusive surveillance, censorship, and the proliferation of disinformation campaigns. Generative AI technology has been misused in 16 countries to distort information, while 22 countries have instituted requirements for companies to deploy automated systems that censor speech protected under international human rights standards. These factors have contributed to a growing negative sentiment towards the impact of AI on internet freedom.

To address the urgent need to protect internet freedom, there is a call for the regulation of AI. The key argument is that regulation should not solely rely on companies, but rather center around human rights standards. It is important to increase transparency and understanding of the design, use, and impact of AI systems. The positive sentiment towards this argument reflects the belief that appropriate regulation is necessary to safeguard internet freedom.

In addition to regulation, there is a push for the inclusion of civil society in the AI regulation process. Currently, civil society is being left out in the race to regulate AI, leading to concerns about a lack of diverse perspectives and potential biases in decision-making. Emphasizing the need for involvement from global majority civil societies, this argument holds a positive sentiment.

Despite the challenges posed by AI, there is recognition that it can also contribute to bolstering internet freedom if designed and deployed safely. AI has the potential to help individuals evade government censorship and facilitate the detection of disinformation campaigns and human rights abuses. This positive sentiment signifies the belief that AI can be harnessed as a tool to protect and enhance internet freedom.

However, it is essential to avoid overshadowing long-standing threats to internet freedom by solely focusing on the regulation of AI. The neutral sentiment surrounding this argument highlights the need to maintain momentum in addressing broader issues related to internet freedom.

The European Union (EU) has emerged as a global leader in internet regulation. Charting a third way between the Chinese state-centric model and the US laissez-faire approach, the EU has enacted significant legislation such as the General Data Protection Regulation (GDPR), which serves as a model for data protection laws worldwide. The Digital Services Act and the EU AI Act are further examples of the EU’s commitment to internet regulation, earning positive sentiment and demonstrating its efforts to protect internet freedom.

The impact of internet regulations on human rights varies depending on the rule of law standards in each country. The sentiment surrounding this statement is neutral, emphasizing the need to consider the context in which internet regulations are implemented and their potential effects on human rights.

Governments have a crucial role in protecting internet freedom and ensuring meaningful multistakeholderism. For instance, the Netherlands is exploring strategies that merge cyber diplomacy, digital development work, and human rights aspects to safeguard internet freedom. Programs like Safety for Voices support human rights defenders and civil society organizations through digital security measures. This positive sentiment highlights the importance of government involvement in protecting internet freedom.

Lastly, multilateral bodies such as the Freedom Online Coalition can play a vital role in reversing the decline of internet freedom. Comprised of democratic governments committed to protecting internet freedom, the coalition serves as a platform for collaboration and advocacy. The sentiment towards this argument is neutral, acknowledging the potential impact of multilateral efforts.

In conclusion, internet freedom has been on a decline for the past 13 years, with attacks on free expression and website blocking becoming more prevalent. AI advancements have intensified the crisis by enabling surveillance, censorship, and disinformation campaigns. To protect internet freedom, there is a need to regulate AI, involve civil society in the decision-making process, and ensure good governance centered on human rights standards. However, AI also has the potential to enhance internet freedom if used responsibly. The EU has been at the forefront of internet regulation, but the impact of regulations on human rights varies across countries. Governments play a crucial role in protecting internet freedom, and multilateral bodies can assist in reversing the decline. Overall, it is essential to navigate the complexities of internet freedom and strike a balance between regulation and broader challenges.

Lisa

During stakeholder consultations conducted by Lisa, a representative of USAID, in various countries, a common concern emerged: dissatisfaction with existing international models of digital regulation. This sentiment has triggered a demand for a different approach, a third-way framework for digital rights that goes beyond the risk-based European model, the laissez-faire American model, and the state-based model adopted in China.

Stakeholders, particularly in countries that make up the global majority, expressed a desire for a digital regulation framework tailored to their specific needs and circumstances. They see the necessity of finding a middle ground to address the challenges faced by their nations.

The implementation of the General Data Protection Regulation (GDPR) and similar regulations, specifically in countries with different income levels and limited oversight capacity, has been perceived as onerous. This concern stems from the difficulties these countries face in fully implementing and complying with such regulations. Additionally, there is a noticeable lack of political will and politicization of some oversight bodies, further complicating the effective execution of digital regulations.

In light of these observations, there is a need for a broader conversation on what human rights protections and safeguards should look like in different contexts. Instead of imposing a one-size-fits-all approach, there should be an exploration of context-specific digital human rights protection and safeguards. This approach acknowledges the diversity of countries and their varying levels of development, eliminating the potential burden of regulations that may not align with their specific needs and capacities.

Overall, Lisa’s consultations highlight the dissatisfaction with current international models of digital regulation and the need for a third-way approach that considers the unique circumstances of each country. The difficulties faced in implementing GDPR and similar regulations also call for a more nuanced and flexible approach to digital rights. Engaging in a broader conversation on context-specific human rights protections and safeguards allows stakeholders to work towards a digital regulation framework that respects the rights of individuals while accommodating the realities of different countries.

Jit

Jit attended a United Nations conference with the intention of obtaining a deeper understanding of the global digital compact and seeking various perspectives on its merits. Jit approached the topic with a neutral stance, indicating an open mind and a desire to gain further insights. Specifically, Jit was interested in exploring the potential advantages and disadvantages of the compact.

During the conference, Jit actively participated in the discussion and initiated the topic of the global digital compact. This demonstrated Jit’s eagerness to engage with others and foster a robust conversation. The conference setting provided an ideal platform for an informed and constructive dialogue on the subject.

The focus of the conversation revolved around the impacts that the global digital compact could have on industry, innovation, and infrastructure, as outlined in the 9th Sustainable Development Goal. This goal aims to promote sustainable and inclusive economic growth by fostering technological advancements and improving infrastructure.

Jit’s neutral stance allowed for an unbiased examination of the global digital compact. By requesting insights on both the positive and negative aspects, Jit sought to gain a well-rounded understanding of its potential impact. This approach reflected Jit’s commitment to considering all perspectives before forming an opinion.

While the exact details of the arguments and evidence presented during the discussion are not disclosed, it can be inferred that the conference attendees shared their specific viewpoints and provided relevant information to support their claims. By facilitating an exchange of ideas and opinions, the conference allowed for a comprehensive analysis of the global digital compact.

In conclusion, Jit’s attendance at the UN conference on the global digital compact offered valuable insights into the topic. By adopting a neutral stance and actively soliciting perspectives, Jit exhibited a genuine curiosity and a commitment to exploring both the benefits and drawbacks of the compact. The conference setting enabled an informed and productive discussion centered around the impact of the compact on industry, innovation, and infrastructure, in line with SDG 9.

Session transcript

Audience:
I’m going to give it another minute or so. I know some other sessions are letting out. And then we’ll just get started. Thanks for joining us. We got you. We’ll take what we can get. All right. We’ve got like a whole workshop plan, so we’re going to Brussels and The Hague, where we’re doing events in both of those. Yeah, we’ll go home for a couple days. And then I’m taking a few days off there. So we’re going to have like a week. That’ll be nice. I’ve never flown with him. He’s big, right? Yeah. He’s like 45 pounds. I think I would be way too stressed out to have him down there. You know? So. Yeah. Okay. Let’s get going. Okay. Are we ready? How many people have a free call tomorrow? Oh, we’re good. Okay.

Allie Funk:
Let’s get started. Thanks everyone for joining us. My name is Allie Funk. I’m Freedom House’s Research Director for Technology and Democracy. We’re really excited to host this conversation amongst three really brilliant folks that have taught me a lot about this field. What we’re gonna do today is I will give a very quick overview of Freedom on the Net and explain what that report even is. Then we’ll dive into an interesting conversation with these folks up here about Internet freedom, how it’s changed over the past decade, where we’re going. And then I’ll open it up to y’all. We’re a small group so I hope we can get nitty-gritty in the issue area. So first let me just have you all introduce yourself. Olga, why don’t we start with you?

Olga Kyryliuk:
Hi everyone, my name is Olga Kyryliuk. I work as a Technical Advisor on Internet Governance and Digital Rights at Internews.

Emilie Pradichit:

Hi everyone, my name is Emilie Pradichit. I’m the founder and Executive Director of the Manushya Foundation. We are a feminist human rights organization based in Thailand, working mainly in Laos and Thailand. And we work at the intersection of digital rights, corporate accountability, and access to justice for local communities.

Guuz van Zwoll:
Good afternoon everyone, my name is Guuz van Zwoll. I work with the Dutch Ministry of Foreign Affairs on digital human rights.

Allie Funk:

Thanks gang. So what is Freedom on the Net? It is Freedom House’s annual assessment of Internet freedom in 70 countries around the world. We look at how easily folks can access the Internet, what the Internet looks like in their countries, and whether their rights are protected or violated by the state, by non-state actors, by companies. Just last week we launched the 2023 edition of the report, the 13th version of it, and I’m just gonna give you some of the top findings. If you want to read the full report, which I would urge you to do, we have some fun graphics, we have country reports written by these folks up here at freedomhouse.org, but here are some quick key findings that I think will ground our conversation today about where we are in the internet freedom space. 2023 marks the 13th consecutive year of decline for internet freedom. Hopefully next year I’ll have the first year of improvement in internet freedom. Doesn’t seem like it, but you know, girl can hope. Attacks on free expression grew more common around the world. Like I said, we’ve been doing this for 13 years, and each year we have another record high of governments assaulting the fundamental right to free expression. In at least 55 of the 70 countries we covered, people were arrested for simply expressing themselves. We had a record high of 41 governments whose regulators blocked websites hosting political, social, and religious speech. And this year what we really zoomed in on is how advances in artificial intelligence are deepening the crisis for internet freedom. We looked at three different ways that’s happening: AI is driving intrusive surveillance, empowering censorship, and also contributing to disinformation campaigns. The two specific deep dives we did are, first, about how the affordability and accessibility of generative AI technology is lowering the barrier of entry into the disinformation market.
So we found that generative AI tech was used in 16 different countries to distort information on political or social issues, often during times of crisis like elections, protests, and other conflict situations. And then second, we looked at how automated systems are enabling governments to conduct more precise and subtle censorship. We found that in at least 22 countries, governments are requiring companies to deploy automated systems to censor speech protected under international human rights standards. So some of the call to action that will drive our conversation today: because of the ways that AI is augmenting digital repression, we call for the urgent need to regulate it. And we think the lessons learned over the past decade or 15 years of debates really provide a roadmap on how to regulate AI. First, we need to not overly rely on companies. At the beginning of the Internet Freedom Project we had a big hope that the Internet’s gonna be this liberating technology, gonna protect democracy, we don’t need to regulate it. Boy, were we proven wrong, so we should be careful and not leave it all up to the private sector. Second, we’ve learned a lot about what good governance actually looks like from the government: centering human rights standards, increasing transparency over the design, use, and impact of these systems. And then finally, the lesson that I don’t think has been learned enough: civil society around the world really needed to be involved in this process. And right now, in the race to regulate AI, civil society is really being left out, particularly those from the global majority. So we close our report noting that if AI is designed and deployed safely and fairly, it can actually be used to bolster internet freedom.
And there’s a lot of different efforts around the world, AI helping people evade government censorship, being used to detect disinformation and document human rights abuses. But we also note that, you know, as we pay attention more to AI, we have to be really careful not to lose momentum on internet freedom issues more broadly. So reversing internet freedom decline really requires regulating AI, but not forgetting about long-standing threats to free expression, access to information, and privacy. So top-line key findings. I will stop talking for a minute. Again, you can go to freedomhouse.org and read the rest. Olga, I want to start with a question for you. You’ve been working on these issues for quite a long time and wearing a couple different hats. What have you learned about internet freedom over the past decade? What has shifted in this space, and where do you think we are today, and where you think we might be going? Lots of questions.

Olga Kyryliuk:
Yeah, couldn’t be an easier question. But I think literally probably everything has changed during this last 10 years. And when I was thinking and looking back, 10 years ago is exactly when I was starting to write my PhD thesis, and when I came to my law department and the topic which I was suggesting was cyber security, and that was something everyone was looking at me like, this is something not important, we don’t know what it is, just choose something which is common sense for everyone. And then I had to drop that and to look more into like what is this multi-stakeholderism, how this has been developing, and whether at all there is any intersection with the international law. And I think also what has changed is that we had a lot of fascination back 10 years ago, which changed to quite a lot of frustration by now. We were hoping that this multi-stakeholder model and having everyone around the same table would solve a lot of issues for us, and that it would be pretty easy for us to reach a consensus and to find a way how to regulate technology. And we were hoping that at some point probably the legal regulation would be also catching up with the pace how technology is being developed, but still it’s 10 years have passed and we don’t really see that this catch-up has happened. But also there have been many things evolving in a good perspective. I think what we have definitely observed is that the public awareness of internet freedom has raised, and this is also a fair argument to make for every stakeholder, for governments, for private sector, for end-users, for civil society. I think everyone now understands the importance of internet freedom and digital rights, because probably 10 years ago this concept did not make much sense for many people. I believe still sometimes now it is still difficult to explain what is essentially internet freedom as a concept, what it covers, and how we should stand for this. 
But at the same time this awareness is growing, and this is important because somehow we have reached the point where we understand that these are the values which we should be protecting. At the same time, just as the technology is developing positively, with a lot of innovation, AI developing, blockchain developing, bringing new opportunities, it is also bringing a lot of risks and challenges, for example to security and safety, and there is always this very slippery borderline: how do you divide, essentially, freedom and safety? In many cases governments tend to go too far into security, and they tend to limit Internet freedom. So on the negative side, we also see a lot of development of digital authoritarianism, and not only by authoritarian governments, but also through a lot of regulation. Ten years ago we didn’t see large-scale shutdowns happening that much across the world. We didn’t see as much content moderation and censorship as there is now. We probably could not imagine that we would have so many problems with regulating private companies and tech giants, and that it would be so difficult to find common ground and to agree on the regulation. So we have to be very, very careful about how we solve this issue.
We have also seen that systems for mass biometric surveillance and facial recognition have developed a lot, and, again, there are countries which are providing these tools and this technology, and there are countries which are simply using it without regulation, without proper legal safeguards for human rights, and this leads to situations where you just don’t have the guarantees in law that you can properly protect your rights. Also, on the positive side, I think it’s still good that we continue collaborating, that we continue talking to each other. We somehow see that maybe some models are not working, but I think we are a bit too slow to accept adjustments to these multi-stakeholder convenings, because, again, we see that many people are not happy; many people want actions, not simply discussions: some concrete partnerships, some concrete initiatives coming out of these conversations, which is not happening. And I don’t think this is a fair point to make, because, again, we are still in the process of developing, the legal landscape is evolving; we can’t just say that we want to keep discussing things if we really can make a difference and can make a change. So many things have changed, and we probably could go into a long conversation about this, but essentially, I think the world is becoming even more complicated than it used to be ten years ago. So, I think that’s what we expected, but that’s where we are.

Allie Funk:
We’re going to pull on the multi-stakeholder thread in a little bit and what meaningful stakeholder engagement looks like. I actually didn’t know that was your dissertation focus. That’s really interesting. But first I want to touch on the regulatory points you made. I think you’re exactly right. That is actually something in the field I’m probably most intrigued by, because I think the trade-offs around regulation are really complex. And I want to pull you in here, because you are European (folks didn’t know, based on the Dutch Ministry of Foreign Affairs title), and the EU specifically has served as a global leader on regulating the Internet, sort of providing what we think about as kind of this third way for Internet regulation, in between the Chinese model and the US laissez-faire traditional approach. We saw with the GDPR, the General Data Protection Regulation, how it served as a global model for data protection laws after it was enacted in 2018. We now have the Digital Services Act, which, for folks who don’t know, is a really ambitious piece of legislation governing online content and a whole host of other things, and we’re also in the negotiation process of the EU AI Act. So I’m curious: this has been sort of talked about as the Brussels effect, how what’s happening in the EU is impacting the regulatory state globally. How do you think about the Brussels effect, and about making sure particularly that the good parts of the regulation get implemented elsewhere, given the challenge that the same law has vastly different human rights impact when implemented in a country with really strong rule of law standards versus a country with poor rule of law standards? So how do you think about that, and what are you all working

Guuz van Zwoll:
Well, thank you, Allie. We want to keep the good things, right? But it’s difficult — it’s like a tightrope that we have to walk. Listening here on the last day of the IGF, it’s two things: we have to fight censorship and we have to fight disinformation, and it’s difficult to do both at the same time, right? You have to find a balance between the two. As the Netherlands, we are very proud to have these EU laws — we would not be able to regulate big tech on our own, and we were happy to do it together with other European countries. We are also proud that it comes out of a long multi-stakeholder engagement process: there have been rounds of input from civil society and from companies, there have been hearings, there have been draft texts, yada, yada, yada. And that has, I think, produced a pretty solid text that we are really happy about, and we’re really looking forward to full implementation early next year. It has started, but we’re building up towards it. So there are two ways in which you can see the Brussels effect. First, when the GDPR was implemented in the European Union, some companies said, well, we’re going to implement it for everyone — it would just be easier to roll it out across all countries. The other way is that countries basically copied the text to align themselves with our system, so it would be easier for them to protect privacy in their systems. And this is something that we’re really focusing on as the Netherlands. Last month, we released the English translation of the Dutch International Cyber Strategy — you can find it on our website, government.nl.
And in it, we really state that we are going to propagate the principles of the DSA, the AI Act, and the DMA to strengthen this Brussels effect, because we do think these frameworks strike the right balance between providing a strong regulatory framework and providing room for transparency and the protection of human rights. That, I think, is the basis. We argued long and hard, and negotiated, to get it into the DSA — and it is there. References to the UN Guiding Principles on Business and Human Rights are there. There are strong transparency clauses. There is a process by which, when your comments on Facebook or any other platform are removed or downgraded, you are able to appeal. And that’s all in the text. So when that text is copied, hopefully those parts are already ingrained in the system, and in that way we try to promote that way of thinking on these issues to other countries. But also, when we have bilateral discussions — either as the Netherlands with other countries, or as the EU with other countries — we will urge third countries not only to fully or partly adapt to these EU regulations, but also to really implement the human rights and democratic clauses that we find so important. This is something our government is very committed to and will be focusing on for the next few years. And I would also like to thank you and congratulate you on a great report.

Allie Funk:
Thank you. That was really helpful. Emily, I’m gonna come to you because — is this on? Oh yeah, it’s still on. Okay, cool. Your organization, Manushya, helps run the #StopDigitalDictatorship coalition, which is working to, I mean, stop digital dictatorship across Southeast Asia — it’s in the name. One of the goals of the coalition is to push for rights-respecting regulatory frameworks, and I think the region is very exemplary of how really problematic laws can undermine human rights. So what type of regulatory provisions do you think are the most helpful or harmful? And how does it relate to AI, if I’m gonna put the buzzword in the zeitgeist? Tell me what’s on your mind on this.

Emilie Pradichit:
Thank you, Allie. Thank you for organizing this important session. So I’m coming from a region where, according to Freedom in the World, six of the ten ASEAN countries are under authoritarian regimes. And most of the time when I tell people I’m coming from Southeast Asia, especially Thailand, people are like, wow, because everybody has this impression that Thailand is such an amazing holiday destination. So I really want to emphasize that, and I urge people to please read the Freedom on the Net report, because if you read it you will realize that among the Southeast Asian countries assessed in the report, many of us are not free. Thailand is not free. Cambodia is not free. Vietnam is not free. Myanmar is not free. Indonesia and the Philippines are partly free. Why? Because our authoritarian governments are weaponizing laws, and so there is a proliferation of cyber laws targeting dissenting voices and human rights defenders in the name of national security. In terms of harmful regulation, what we have seen growing in Southeast Asia are regulations meant to protect national security, under which anyone who criticizes the government is a threat to national security. And so we have a lot of cases of pro-democracy activists in Thailand, in Laos, throughout Southeast Asia, who are being jailed just for voicing the truth on Facebook, through Facebook posts. We have a human rights lawyer, Arnon, who was sent to jail just a few weeks ago, who is facing 14 charges under the Computer Crime Act and the lèse-majesté law, and who faces up to 210 years in jail just because he is calling for monarchy reform and for true democracy.
So I think there’s a real need for us to look at what those harmful regulations in Southeast Asia are, but also at how governments in Southeast Asia are regulating tech companies. Just for example: in December 2022 in Thailand, the Thai government passed a decree obliging tech companies to remove, within 24 hours, any content that is against national security. But again, there’s no clear definition of what national security is, so everything can become a threat to national security. So for us, the good regulation we want to see is regulation that protects our online freedom, that is in line with international human rights law, that protects our privacy, and that ensures surveillance is not used against us — because, you know, in Thailand and Indonesia we are also facing the Pegasus spyware being misused against activists, against journalists, against politicians. So it’s really important for us to have regulations that are human-centered. And to your question regarding AI: generative AI can be powerful, right? It can improve our lives, but as we heard this morning it also carries a lot of risks. In Southeast Asia we have faced the misuse of AI, especially when it comes to facial recognition, to surveillance, and also to bias, especially in terms of language. The language structure in some of our Southeast Asian countries derives from Sanskrit or Pali. So if you are using Facebook, and Facebook is using AI in content moderation to remove or block content that violates the community standards — how can an AI system distinguish a word that, in one sound or one spelling, has five different meanings, right? So that’s why, for us, when we are talking about regulation, we also need to talk about regulating tech companies.
It’s really important for us to move the discussion beyond voluntary guidelines. This morning we heard about the Hiroshima AI process. If you’re an activist on the ground — I’m a human rights lawyer and I work with a lot of activists on the ground — and I go back to them saying, you know, I went to the IGF and I heard about the Hiroshima AI process, they’re going to tell me: oh, new guidelines, new voluntary measures — where is that going to take us? I think we have reached a point where we need real regulations, and we need mandatory due diligence. It’s not enough nowadays for Meta, for Microsoft, and for other tech companies to tell us that they are conducting voluntary human rights impact assessments, when what they are barely doing is just identifying the most salient human rights issues. Then they engage us in stakeholder engagement and present to us the most salient human rights issues, as if we didn’t already know them. We already know the human rights issues, right? So we go through these stakeholder engagement processes where the human rights issues are merely identified and presented, but there is no prevention, no mitigation, no addressing of those salient issues. If companies are serious about implementing the UNGPs, and also the OECD Guidelines for Multinational Enterprises, they should be able not only to identify impacts but also to prevent and address them, and to provide remedy. A tech company telling us that the appeal mechanism, or reaching out to the human rights team, is the best remedy on offer today — it’s not enough. There’s a real need to legislate the UNGPs into real law.
There’s a real need for mandatory human rights due diligence — due diligence that is actually meaningful, with meaningful stakeholder engagement, not just a tick-the-box exercise, because a lot of us in Southeast Asia are tired of being called into stakeholder engagement calls where we give our input and there’s nothing in terms of follow-up. So: meaningful stakeholder engagement, not only with civil society, but also with the groups that are directly impacted by the misuse of the platforms — by governments, by trolls. In Southeast Asia we are also facing the proliferation of cyber armies, from Myanmar, from Laos, from Thailand. Governments are investing in cyber armies, and we are so small compared to them. When you are one or two people working on digital rights in a human rights organization, it’s not enough to fight a cyber army. So what do we do? And when we turn to tech companies for support, there’s nothing they can really do, because they are not being regulated. So it’s time for tech companies to be effectively regulated through meaningful, mandatory human rights due diligence. And we need that mandatory due diligence to come from the countries where those tech companies are operating, because then there would be an extraterritorial obligation for those companies to make sure that the UN Guiding Principles and due diligence are respected throughout the supply chain, including the country offices. But we also want responsibility and remedy — civil and criminal liability for those companies as well. Take, for example, what happened in Myanmar, and the way the Facebook platform was misused by the government and by other groups to promote hate speech against the Rohingya. The fact that nobody is being held to account is not normal. The fact that nobody is being held to account in terms of responsibility and criminal and civil liability is just not normal.
So we really need effective mandatory human rights due diligence that also includes human rights impact assessments for AI, meaningful stakeholder engagement, and criminal and civil liability for the companies.

Allie Funk:
I think this next year, with DSA implementation, is gonna be really interesting — seeing how those impact assessment requirements play out. And if you all hadn’t seen, there is now a new database, thanks to the DSA — I don’t know what day it came out — where a lot of companies are reporting different content removals and other actions under their terms of service, which you can actually go through. I think that will take a very long time, because there’s a lot in there. Let’s go to this question of multi-stakeholder engagement that you brought up, because this is something we think a lot about. What does multi-stakeholder engagement mean? How do you make it meaningful? Guus, I’m going to come back to you. You mentioned your International Cyber Strategy. The document talks about incorporating more emerging countries in internet governance and lays out the importance of the multi-stakeholder model of internet governance. How does the Netherlands plan to promote these objectives, particularly as it relates to inclusivity with civil society and with the global majority, who are on the front lines of digital repression?

Guuz van Zwoll:
Well, that’s an excellent question, and a difficult one. We do try to answer it in our strategy. Basically, in our cyber strategy we try to connect three strands of work: the work we do on traditional cyber diplomacy and cybersecurity, our digital development work, and our human rights work, with internet governance as the overarching theme. I always try to see it as a stool — a milking stool or something — that needs three legs to stay balanced. You need some form of digitalization in order to be digitally connected as a country; you need digital security in order to keep that structure safe; but at the same time you need principles and good governance to govern that structure. Otherwise, you’re just implementing a censorship and surveillance apparatus, right? So what we do as a government is really try to implement this across all our work. Through our development cooperation — and we work on this with our colleagues from the Freedom Online Coalition — we try to establish principles for donors in digitalization, in order to improve the digital rollout and connect the last third of the world that is still unconnected, while at the same time getting these other principles in place as well. Through the EU Global Gateway, for example, we try to make sure we are not only getting everyone connected, but that digital security and also principles and good governance are part of the equation — and that through those processes there is a multi-stakeholder approach, so that voices from civil society are part of those discussions locally.
But this is still really in its early building blocks, and it’s something we need to work on — a clear aim we set out in our strategy that we’ll have to roll out over the next few years. It’s not, of course, the only thing we do. We also work with local civil society through our human rights program. We have a strong program called the Safety for Voices program, through which we try to support human rights defenders and civil society organizations on security — physical, but with a strong digital component. All the programs we run that support civil society and human rights defenders have this digital component to them, so we also try to mainstream it in those settings. And that’s work done from The Hague, but the same principles apply to the work we do through our embassies. Yeah, I think that’s where we’re at.

Allie Funk:
Great. I’m gonna ask one more of Olga and one more of Emily, and then we’re gonna open it up — time has snuck up on me. So, Olga, you teased that dissertation, so I’m going to press on that a little bit. And I should also add that the Netherlands is taking the chair of the Freedom Online Coalition next year; the U.S. government is chair now. For folks who don’t know, the Freedom Online Coalition is a multilateral body of — 27 governments now? How many? 38? Wow, I am behind. I’m a bad advisory network member here. It’s working to protect internet freedom around the world. So I’m curious. It’s a two-pronged question, and I’m going to ask the same of both of you to hear your input. What does meaningful multistakeholderism look like to you? How can governments make sure that they’re listening to the different sectors? But also, what do you think the role is of the FOC — a multilateral body of democratic governments really committed to protecting internet freedom? How can they reverse this decline? Do you have any best practices they can adopt? That’s like seven questions in one, so I’ll let you take it.

Olga Kyryliuk:
This is actually what I also want to know. Since we have this opportunity, maybe Guus can help clarify how civil society can get better engaged in the FOC, especially because this is also part of my job portfolio. I need to identify this connection point, because my team is running the largest internet freedom project: we cover five regions across the world and work with 120 implementing partners from civil society. So essentially we have this pool of talent — civil society activists and human rights defenders — and we would like to see what the entry point is, how we can better coordinate, how we can help engage them in your space, and where you see the value of these people and how they can meaningfully contribute to what you are doing. You used to have the Freedom Online Conference, which has not been held for the last few years, and which I think was one of the opportunities for different stakeholders to get together and discuss the issues that are important and the emerging trends. But this is not happening anymore. I know there is the Advisory Network, but again, that is an election-based process which also happens only periodically. So I would say: if there is any opportunity to organize some kind of periodic consultations with civil society, and to choose thematic issues — so that it’s not about everything and about nothing at the same time, but very specific, whether you want to focus on some regulatory issue or on something related to AI — I think we would be only happy to support that, and essentially we have a huge variety of expertise. I loved how it was done under the FOC US chairship — something Lisa was also leading — this consultation with civil society on the principles for human rights in the digital age.
It was really nice to have everyone in the same room, everyone truly having the opportunity to express their opinion, and we also got the result of the discussion — delivering something very tangible, with a practical result. This is something which is missing and which we could do more of. Thank you, Allie.

Emilie Pradichit:
All right, so in terms of the FOC — and there’s also Michael in the room, so I’m also looking at you in terms of the Forum on Information and Democracy — working with member states, you have the potential to support us in countries where there is no democracy. Since the Netherlands will be chairing the FOC next year, I really urge you to help us, because our online democracies are under attack, and that is not going to change tomorrow. And 2024 is a very important year: there will be a lot of elections throughout the world, so there will be a lot of demands on the FOC. Honestly, the FOC is not accessible and is not known to the majority of people from the global majority. The FOC is accessible to DC groups — online freedom and digital rights groups based in Washington, DC. For us, based in Southeast Asia or on the African continent, we don’t know about the FOC and we don’t know how you can help us. So I think the first thing you could do is better promote your work, so we can better understand how the FOC can actually support us — and actually support our demands for true democracy. We really need statements from FOC members targeting our authoritarian governments. We are trying our best. We are a coalition — the ASEAN coalition to stop digital dictatorship — and we are also part of the Southeast Asia CPN targeting tech companies. But we are just a handful of people, so we actually need your support. And there’s a real need for the FOC to look at the global majority and to engage with us. When you are doing stakeholder engagement, please don’t do it only in DC. There is a need for you to come to us, because we need your input, we need your recommendations, and we need your statements targeting our governments and also the private sector in our countries. Why do you need to come to us? Because for most people from the global majority, traveling to Europe or to the US is not easy, right? There are visa restrictions.
So it’s always the same people that you get to meet — always the people who can travel, always the people who have access to you. There’s also a need for the FOC to talk not only to the traditional digital rights organizations, but to the broader human rights field. The digital space is becoming more and more important — I mean, we’re all moving into the metaverse; what’s happening offline is now happening online. So human rights groups also need to understand and engage with the FOC. So really: inclusivity is key, engaging with the global majority, and bringing the FOC to global majority countries is really important, because not everybody will be able to travel to you. Investing in civil society so it is able to engage with you, and financially supporting groups that are fighting authoritarian governments online, is also very important, because most of the time not everybody can engage and not everybody can do this work. There is also a need to understand that the work we do puts us at threat. A lot of us sometimes cannot speak publicly or cannot engage; a lot of activists have to remain anonymous — the Freedom on the Net report has a lot of anonymous authors as well. So there’s a real need for the FOC to look at the global majority, to understand us, to come to us, and also to financially support us, because we need this support to be able to fight against digital dictatorship.

Olga Kyryliuk:
Following on what Emily was saying, I was also thinking that you have this access to governmental people, which is usually what is missing — not at the global IGF, probably, but at the regional discussions, because we also have regional and national IGFs, and it is always a struggle to get governmental representatives to be present in the room. So I would say you could also focus on working at least with those countries which are members of the FOC, to somehow encourage and build connections between them and these local and regional communities, because they could be part of these conversations, get into specific partnerships, and work on some issues together. From my region of Southeastern Europe, it is maybe only Georgia and Moldova who are members of the FOC. But at least at that level, at least those few countries — because I’m also part of the IGF for Southeastern Europe, and I know from firsthand experience how challenging it is to get in touch with governmental people. So that would also be very practical help from your side: at least helping us get connected with these people and have them in the room.

Allie Funk:
Is there anything you want to say before we go to Q&A about the FOC?

Guuz van Zwoll:
Well, these are very concrete and thoughtful points. We’re writing our plan of action as we speak — we just had it out for consultation with the Advisory Network — and these are great points that we’re happy to digest and take further. I think it’s very interesting, and very nice, to hear that the Freedom Online Conference is being missed, because COVID was the first reason not to organize it, but also there are already so many conferences, right — we’ve got the IGF. So it would be good to discuss later how we can make the best use of the space, time, and carbon footprint that we have. On the other points: many of us — at least myself, but also many of our colleagues within the FOC — are always very open to discussions with human rights defenders and digital defenders. So it would be great to see if we can promote that strand of work and have direct contact outside of the Advisory Network. We could also have a long talk about representation in the Advisory Network, and I think we should. But these are very valid points and we’ll certainly take them forward. One last thing, on the security side: as the FOC, we created a group called the Digital Defenders Partnership, which focuses on holistic support for human rights defenders and digital defenders at risk. It’s specifically aimed at digital defenders and civil society groups facing online threats, but also now physical and psychological threats, etc. That’s one of the concrete results that we continue to support as the FOC. So we do try to keep an eye on it, but it’s always great to have concrete suggestions on how to improve these things. Thank you.

Allie Funk:
I’ll just make a pitch: if RightsCon is not happening until 2025, there is a little space in our calendars for an FOC conference. I can see if we can invite everyone to the Netherlands. All right, everybody, we’re going to the Netherlands — you’re gonna kill me. All right, we’ve got 15 minutes. I want to open it up to y’all. Who has a question? Anybody? Hi, Lisa. Oh, yes, Jit.

Jit:
Yeah, thanks, everyone, for this fabulous discussion. I learned a lot. In thinking about how we can make meaningful impact, since we’re at a UN conference, I’m curious to hear what people think about the Global Digital Compact — pros, cons, what we see happening with it.

Allie Funk:
Step right in, if anybody wants to take that tiny question. Yeah. And we also have other questions — we can just collect them all, maybe, and then answer. Oliver?

Audience:
For the gentleman: you mentioned that you can provide some type of support for people who are under some sort of threat for their online activism. So I was wondering if you could explain what type of mechanisms you have available — in terms of what? Sending lawyers if they’re already in prison, or something like that? I’m just curious to know what exactly you mean by that, bearing in mind the geography, bearing in mind the different legal systems, and so on — what is or is not a crime under a given legislation. Thank you.

Allie Funk:
Just going to collect them all. We’ll do Oliver, and then Lisa. Then we should answer some, because we’ll have a lot of questions.

Oliver:
Hi, this is Oliver. I won’t give my organization’s name, if you don’t mind, just for security reasons. But I think it’s really important for the FOC to be a bit clearer with the outside world about what they’re doing in regard to the UNESCO guidelines, which CSOs in the global south are extremely concerned about — the direction of the guidelines and how they will encourage authoritarian states to crack down on the digital space. We haven’t seen much from the FOC — not that we ever really would — but it would be very useful to know that behind the scenes there is actually some pushback on something that looks like it’s being driven by authoritarian state members of UNESCO. Thanks.

Lisa:
Hi, everyone. I’m Lisa from USAID. I’ve been doing a lot of stakeholder consultations this year in different countries where we are doing work or scoping out potential new work. One of the things that keeps coming up when we talk about international human rights frameworks — and the GDPR and the DSA and the DMA and the EU AI Act and all of these frameworks — is that other countries, particularly in the global majority, see the risk-based European model, they see the laissez-faire, industry-based American model, they see the Chinese state-based model, and they don’t want any of those models plopped into their space. They’re thinking about: what is this third way? So it’s very Cold War rhetoric — we’re in the third space. And what does that mean? How are we going to figure out a regional approach, perhaps, or a national approach? I think one of the key concerns is that when you plop the GDPR into Serbia or Indonesia or Kenya or wherever, there are certain aspects of the regulation that are extremely onerous for countries at a different income level than a lot of European countries, and that are very challenging to implement when you don’t have the oversight capacity — and there is perhaps a lack of political will and a politicization of some of these oversight bodies, which is also a concern. So I’ve sensed a real frustration among a lot of actors in civil society and local tech in different countries with what people have expressed as a heavy-handed attitude of “the international human rights framework is the thing to implement everywhere.” So what are your thoughts — this can be for anyone on the panel — about how to navigate that? So that you still transfer the overall protections and safeguards, to the extent that they’re going to be useful in those contexts for human rights defenders and activists and the like, but you’re not imposing aspects of that regulation — or imposing at all, really — and there is space for a conversation about what human rights protections and safeguards look like in different contexts.

Allie Funk:
Anything else before we dive in? Okay, all right. Who wants to start? My esteemed panelists — Olga, there you go. And I can also repeat the questions if need be, and make sure we answer them all.

Olga Kyryliuk:
On the Global Digital Compact: for me, it’s the same thing as with the Freedom Online Coalition. I would want to see more clarity about what is happening, where it is going, and especially how civil society can be part of it, because there is a lot of frustration at the moment about how to engage. Likewise, we were trying to see how we can support our implementing partners in engaging with this process, and we don’t really see a clear way or a clear venue where that can happen. On regulation — Lisa’s question — I think the problem is that we assume everything coming from the EU will just solve all our problems, that it is the ideal and the standard which we all should be using — which, as you’ve mentioned, has its own challenges once we get to the implementation and enforcement phase. But there is always a framework of basic principles and standards that can be replicated in every single country. Once you go into detailed regulation, though, you have to be conscious of the context it is being transplanted into. So it requires dialogue and conversation with national legislators — and probably some capacity building for them as well — because what countries are doing is just taking the text of the GDPR and implementing it as their national law. Then, when it comes to implementation, they face a lot of challenges, but what can you do — the law is already there. So this has to happen at an earlier stage, when a specific legal act is being incorporated into the national legal system.

Emilie Pradichit:
Thank you. So I’m gonna answer the question related to the protection of human rights. As Olga said, there is a need to also understand the local context. And in most Southeast Asian countries, and I’m gonna talk about Thailand mainly, the situation is that we have a Data Protection Act. And what the Thai government said is: oh, we just took the GDPR and we developed the Data Protection Act, so we are, you know, following the EU example. But there’s no real oversight, there’s no independent oversight, it’s totally government-led oversight, and there’s no remedy. And there’s an exemption in that law that allows the government to violate our data under the consideration of national security. So governments, I would say, are really good, you know, at replicating what the EU is doing, which is a challenge for us, because we want them to engage in a dialogue with parliamentarians but also with civil society. And what governments are doing is that they’re saying: I’m taking the German example, I’m taking the EU example, and I’m developing this law. And it’s government-led, it’s from the executive, it’s not from the legislative, and it allows the government not to engage with civil society. So there’s no dialogue, and that’s a real frustration for us. And then they go into diplomatic discussions with diplomats in the country, but also at the global level at the UN, saying: we are following global standards, and we are following good standards because we are in line with the EU. So it’s a real challenge for us, because then diplomats believe it. Diplomats are congratulating Thailand for having a Data Protection Act instead of really looking into the act, because the act is in Thai unless civil society translates it for the international community. So it’s really important to say that civil society is not against international human rights law. Like, we all follow international human rights law. 
Actually, we want governments to respect international human rights law. We just want to make sure that when there is an exchange between Global North countries and global majority countries, that exchange takes into consideration our context. When governments are exchanging, like the Thai government or the Lao government going to Australia to look at AI regulation, for example, or when the Thai government says we are putting together an AI advisory committee and are inviting experts from all around the world, it’s just to appear as a good student, or just to appear as a good member state at the UN. But in reality, they’re just fooling the world. And never, ever do we have the experts and the other governments who engage with the Thai government, helping to develop those laws, asking the Thai government: but where is civil society? Where is the dialogue with civil society? Where is the dialogue with parliamentarians? So this is where the frustration is coming from. It’s the lack of dialogue and the lack of understanding of the context. And it’s how easily EU member states, and also the US and the international community, can be fooled by our governments. Thank you.

Guuz van Zwoll:
That was a great point. And I mean, I think that for us, although there might be some people that had hoped for it, the worldwide rollout or effect of the GDPR was something where everyone was a little bit surprised, right? And then we started claiming the Brussels effect and things like that. But I think that we didn’t really plan on it; it was not there in the room, so to speak. I mean, we’re diplomats, we’re human beings, we work from nine to five. So the point being is that I think we have to learn by doing on this. And your feedback on this is extremely helpful, and each time we’ll get better at it. But we need your honest and open criticism on these things in order to learn from it and to implement it the next time we have these discussions on how we are going to have a shared approach on AI, or a shared approach on the DSA or the DMA. So that’s something that I would just urge everyone to keep doing, and then also to reach out not only to the embassy, but also try to find the advocacy focal points, because these are the ones that are probably more receptive to these arguments than someone who’s covering 27 issues because there are two people in the embassy. That’s just very challenging. So, yeah, I would try to do that. As for the UNESCO guidelines, we’ve been following that progress with great interest. As the FOC, we did approach it; the advisory network wrote terrific comments on it, and we took that all to heart when talking to UNESCO and then participating in the Internet for Trust Conference. I mean, this is not completely FOC, but I do want to mention our recently launched global 
declaration on information integrity, which was signed by 30 countries, and more countries are signing on to it. It tries to say: well, it’s very important that we are going to fight disinformation and make sure that we are promoting information integrity, but at the same time we do need these human rights guardrails, so to speak, in international processes like the UNESCO process, but also the Code of Conduct that’s being run by Under-Secretary-General Fleming, to make sure that the human rights language is there in those processes. That’s something that we are really pushing for as the Netherlands, together with 30 other countries, including the US and the UK, but also countries like Brazil, Argentina and Chile, which have signed up to those principles. We do try to promote it in that way. About the GDC: I think that it’s also very difficult for us, at least as diplomats, for me, to follow it. There have been some stakeholder rounds, and we attended those; they were open to watch online, so you know as much as I do. 
I mean, it’s just, yeah, we are following it, and we try to make the best of it. We do think that it’s great that, at least in the chapters or sections that are there, human rights online is really present, so we do have good hope for it, but we have to see how it will develop. And for us it’s really a question, one that we also set out pretty publicly, I would say, in the strategy, of how to strike a good balance between the GDC and WSIS; they are both very important. We have to find a good way of protecting human rights online, we have to find a way to encapsulate multistakeholderism in these governing processes, but at the same time we also have to make sure that these processes are really transparent, that everyone can engage, that the global majority countries have a seat at the table, and that we include them in the process. That remains a constant challenge. But that’s always, of course, a challenge in these issues. Yeah. And then on support for human rights defenders at risk: the Netherlands funds tons of NGOs and initiatives to protect human rights defenders who are at risk locally. So we, for example, fund Frontline Defenders, which has, I think, 12 regional coordinators all over the world. Well, Southeast Asia is, of course, difficult, with tons of languages, but, for example, in Latin America they speak the local languages; they are there, and they have someone for Southeast Asia. They’re really trying to provide practical, holistic support for at-risk human rights defenders, both in a legal way but also courses in physical protection, digital security, psychological well-being, et cetera. We fund that with Frontline. We fund Reporters sans frontières through the EU. We support Protect Defenders, which is a consortium of 13 organizations that are doing this worldwide. 
I mean, there are tons of organizations that try to provide these kinds of direct, practical support for at-risk human rights defenders. And some of them are even here: Access Now has a booth, they have a helpline, and they’re connected with DefendDefenders and work together with Frontline. And if you want to know more about it, I’m happy to speak for hours about this topic, because I’m really passionate about it.

Allie Funk:
These microphones are tricky. Well, thank you all. We’re at time. I think that we could go on for a really long time; there are just so many initiatives. I’m so tired, and I’m sure everybody else is. We’ve got a seven-person team, so we have to make tough decisions about how to engage and when not to. And I’m grateful that we’re in partnership with all the fantastic panelists and the people in this room, and that we’re doing this work together. And I won’t hold you back from dinner anymore; I know we’re all hungry as well. So thank you for joining us. A pitch again: you can read the latest Freedom on the Net report at freedomhouse.org. Let us know what you think. Looking forward to a great week. Thanks, all.

Audience:
Thank you.

Allie Funk

Speech speed

179 words per minute

Speech length

2358 words

Speech time

789 secs

Audience

Speech speed

123 words per minute

Speech length

436 words

Speech time

212 secs

Emilie Pradichit

Speech speed

197 words per minute

Speech length

2788 words

Speech time

849 secs

Guuz van Zwoll

Speech speed

182 words per minute

Speech length

2770 words

Speech time

912 secs

Jit

Speech speed

187 words per minute

Speech length

53 words

Speech time

17 secs

Lisa

Speech speed

162 words per minute

Speech length

418 words

Speech time

155 secs

Olga Kyryliuk

Speech speed

183 words per minute

Speech length

2025 words

Speech time

663 secs

Oliver

Speech speed

195 words per minute

Speech length

141 words

Speech time

43 secs

Unstoppable Together: Digital Grassroots Impact Report Launch | IGF 2023 Launch / Award Event #143


Full session report

Estelle

In this summary, Estelle and her team express positive sentiments about their achievements. The team’s hard work and dedication resulted in the completion of an impact report showcasing their accomplishments. Their efforts have helped create new young leaders from their side of the world, highlighting the team’s ability to make a lasting, positive impact on their community. Estelle, in particular, takes great pride in the team’s success.

Estelle also strongly believes in the importance of representation and recognizes its significance in creating a fair and inclusive society. To promote representation, Estelle initiated DIGRA programs with the aim of fostering increased representation from their side of the world. These programs are designed to empower individuals and provide them with opportunities to make their voices heard, aligning with the goals set forth by SDG 10: Reduced Inequalities.

The positive sentiments expressed by both Estelle and the team reflect the significance of their achievements. Through hard work and dedication, the team’s impact report serves as tangible evidence of their success. Moreover, the creation of new young leaders signifies the team’s ability to inspire and cultivate future talent. Estelle’s commitment to representation further emphasizes the importance of diversity and inclusion in various domains, including the Internet governance ecosystem.

This analysis sheds light on the remarkable accomplishments of the team and Estelle’s dedication towards creating positive change. Through their efforts, they aim to reduce inequalities and create a more inclusive world. The success of their initiatives serves as an inspiration for others, encouraging them to follow suit and make a difference in their respective communities.

Audience

During the event, the audience expressed concerns regarding the lack of multilingualism and the predominance of English-speaking Africans at the Internet Governance Forum (IGF). The audience specifically highlighted the need for the IGF to promote a multilingual environment. One audience member from Cameroon expressed surprise at learning about the project for the first time at the event. This observation drew attention to the necessity of reaching out to countries where English is not the primary language of communication.

The call for a multilingual environment at the IGF aligns with the goals of inclusivity and reduced inequalities, as outlined in SDG 9 (Industry, Innovation and Infrastructure) and SDG 10 (Reduced Inequalities). By accommodating various languages, the IGF can ensure that individuals from diverse backgrounds have equal access and representation in shaping internet governance.

In addition to the language barrier, an audience member from Cameroon also highlighted the need for clarification on how to become an ambassador for the Digital Grassroots Movement. This request reflects an interest in actively participating and contributing to the movement’s objectives, particularly those related to quality education (SDG 4) and reduced inequalities (SDG 10).

Overall, the audience’s concerns and requests highlight the importance of promoting inclusivity, reaching out to non-English speaking countries, and providing clear guidelines for participation. Addressing these issues will enhance the effectiveness and impact of the Digital Grassroots Movement and create a more diverse and inclusive environment at the IGF.

Nancy Wachira

Nancy Wachira’s journey with Digital Grassroots (DIGRA) has been instrumental in her growth as an advocate for digital inclusion. Since joining DIGRA in 2018, Nancy has actively engaged with the organisation and has become an essential part of its efforts to bridge the digital divide.

One of the key ways in which Nancy has contributed to DIGRA’s cause is by representing the organisation at various international events, such as the Commission on the Status of Women. This involvement has not only provided her with a platform to share her insights on digital inclusion but has also allowed her to network with like-minded individuals and organisations. Through these interactions, Nancy has been able to broaden her perspective on the issue and gain a deeper understanding of its global impact.

Furthermore, Nancy’s work with DIGRA has had a specific focus on reducing digital inequalities in rural communities. She recognises the importance of ensuring that people living in remote areas have equal access to digital technologies and opportunities. By actively working towards this goal, Nancy is actively contributing to the United Nations Sustainable Development Goals (SDGs) of Industry, Innovation, and Infrastructure (SDG 9) and Reduced Inequality (SDG 10).

In addition to her involvement with DIGRA, Nancy also acknowledges the significant impact of her mentors and the supportive community within the organisation. Mentors such as Esther, Ufa, and Wadhangi have played a crucial role in guiding and shaping Nancy’s advocacy journey. Their expertise and guidance have provided Nancy with invaluable insights and teachings, enabling her to further develop her skills and knowledge in the field of digital inclusion.

Overall, Nancy Wachira’s involvement with DIGRA has been transformative. Her active participation in the organisation, representation at international events, and focus on reducing digital inequalities in rural communities highlight her dedication to the cause of digital inclusion. Furthermore, the influence of her mentors and the supportive DIGRA community has significantly contributed to Nancy’s growth and success as a digital inclusion advocate. Through her efforts, Nancy is making tangible contributions towards achieving the SDGs and creating a more equitable digital future for all.

Grace Zawuki

Grace embarked on her DIGRA journey in 2022 when she participated in the Digital Rights Learning Exchange, which proved to be a transformative experience for her. This opportunity equipped her with valuable knowledge and skills in the field of digital rights. Recognising her potential, Grace was subsequently selected to join the prestigious Community Solutions Program, solidifying her dedication to addressing digital rights issues in the United States.

Grace expresses her profound gratitude for the DIGRA community, which has shaped her perspective and fostered her personal and professional growth. She acknowledges the invaluable impact DIGRA has had on her journey and credits it for her positive transformation.

Collaboration emerges as a crucial factor in this context, with Grace highlighting its potential to make a significant difference in communities and elevate Africa’s global standing. Emphasising the power of collective efforts, Grace and her fellow advocates strive to effect positive change by addressing digital literacy and digital rights issues.

Grace’s own experiences serve as evidence supporting the argument for collaboration and its benefits. By working with individuals from diverse backgrounds and areas of expertise, they can adopt a comprehensive approach to solving complex challenges. Furthermore, their collective efforts not only improve their own communities but also position Africa as a hotbed for innovative solutions in digital rights.

In summary, Grace’s involvement in DIGRA and the Community Solutions Program is a testament to the transformative power of such initiatives. Through collaboration and a shared commitment to enhancing digital literacy and digital rights, Grace and her team make a meaningful impact in their communities, propelling Africa into the spotlight as a catalyst for positive change.

Stanley Junior Bernard

During the discussion, the speakers delved into several topics pertaining to digital rights, internet governance, and internet accessibility. They underscored the importance of advocating for digital rights and internet governance, recognizing that these areas play a crucial role in shaping the future of the digital landscape.

One notable point raised was the positive impact of the training received through Digital Grassroots in understanding digital rights and internet governance. This training not only enhanced the participants’ knowledge but also equipped them with the necessary skills to actively advocate for these rights.

Moreover, the speakers highlighted that the advocacy for digital rights and internet governance led to significant recognition. For instance, one speaker mentioned being awarded a scholarship by One Young World due to their involvement in championing digital rights. This achievement underscores the recognition of the importance of such advocacy efforts on a global scale.

The significance of an open and accessible internet was also emphasized during the discussion. It was noted that although internet connectivity remains challenging in countries like Haiti, there is a shared belief that the internet should be accessible to all, not only in developed nations but also in the global South. This argument stems from the understanding that a more equitable and inclusive internet access can help foster reduced inequalities and promote innovation worldwide.

Additionally, the speakers expressed their support and admiration for the work of Digital Grassroots in building digital capacity for marginalized youth. Specifically, they praised the innovative program called the Digital Rights Learning Exchange, which was highly regarded for its ability to empower marginalized youth.

Overall, the discussion provided valuable insights regarding the significance of digital rights, internet governance, and internet accessibility. It highlighted the importance of advocacy efforts, the need for an open and accessible internet for all, and the crucial role that organizations like Digital Grassroots play in building the digital capacity of marginalized youth globally.

Hanna Pishchyk

Hanna Pishchyk, who is currently based in France, is the Communications Lead at Digital Grassroots. She plays a crucial role in acknowledging the efforts and impact of DIGRA community members. Digital Grassroots is a community of Internet governance advocates focused on sharing knowledge and experiences. They aim to achieve global digital inclusion, reduce digital inequalities, and promote digital literacy. Nancy Wachira, a member of DIGRA since 2018, works towards reducing digital inequalities in rural communities and represents DIGRA in various events and initiatives. Stanley Junior Bernard has been an impactful member of the DIGRA community, contributing to various projects and leading a successful DIGRA mini-hackathon in Haiti. Stanley also promotes digital literacy and works to mitigate gender-based violence through platforms like the Young Girls Empowerment Initiative in Haiti. The efforts of Hanna, Nancy, and Stanley highlight the importance of industry, innovation, and infrastructure in achieving Goal 9 (Industry, Innovation and Infrastructure) and Goal 16 (Peace, Justice and Strong Institutions) of the Sustainable Development Goals.

Uffa Modey

Digital Grassroots is a youth-led non-profit organization founded in 2017, with a focus on promoting digital citizenship and advocating for internet rights in underrepresented regions. The organization conducts advocacy programs and digital rights learning exchange programs as part of their efforts. One of their flagship initiatives is the Digital Grassroots Ambassadors program, which aims to raise awareness and advocate for the internet in local communities. By engaging with young individuals in underrepresented regions, Digital Grassroots aims to bridge the digital divide and reduce inequalities.

Uffa Modey, the co-founder and global lead at Digital Grassroots, strongly supports the creation of pathways for young individuals to understand and navigate the internet ecosystem in their communities. She believes in collaborative work towards digital rights and internet governance with others in the global internet ecosystem. This demonstrates the organization’s commitment to fostering partnerships and creating a collective impact.

The Unstoppable Together report summarizes Digital Grassroots’ work over the past five years. Collaboratively created with the community, the report provides an ownership perspective and showcases the experiences and challenges related to digital rights abuses. It highlights the importance of community engagement and inclusivity in sustaining the work of Digital Grassroots. The organization recognizes the crucial role of community resources and contributions in their digital rights advocacy efforts.

Digital Grassroots also extends its reach to Francophone-speaking countries in Africa, running a specific training program on internet governance and digital rights for these regions. This demonstrates the organization’s dedication to addressing regional needs and empowering individuals in Francophone-speaking communities.

Additionally, Uffa Modey acknowledges the language barrier as an issue in internet governance. This shows the organization’s awareness of the challenges faced by different communities and its commitment to creating accessible platforms and materials.

Finally, Uffa Modey emphasizes that Digital Grassroots is continually looking for innovative ways to involve more people in internet governance. Their commitment to openness and a proactive approach ensures that the organization remains dynamic and responsive to changing needs and circumstances.

In summary, Digital Grassroots is a youth-led non-profit organization focused on promoting digital citizenship, advocating for internet rights, and bridging the digital divide in underrepresented regions. Through their advocacy programs, initiatives like the Digital Grassroots Ambassadors program, and collaborations, they strive to make a positive impact and empower communities in their digital journey.

Rachad Sanoussi

Rachad Sanoussi, a technical support member of Digital Grassroots, introduces himself as he takes the stage to present the impact report. He expresses his optimism and excitement for the launch, firmly believing in the collective force of the organization and the community in effecting change in the digital space. Rachad’s deep-rooted faith in the team’s abilities and capabilities shines through his speech.

During his presentation, Rachad graciously acknowledges the team’s hard work and dedication in delivering the impact report and successfully executing DIGRA programs. He expresses gratitude towards his fellow team members for their active engagement and valuable contributions. The significance of the impact report launch is highlighted by Rachad, emphasizing its importance to the organization.

Looking to the future, Rachad anticipates further progress and eagerly looks forward to continuing the journey with the team. He expresses his belief that together, they are unstoppable, and he is determined to build upon the current foundation for even greater accomplishments.

Notably, Rachad emphasizes the inclusive nature of Digital Grassroots programs. He shares his own experience of hailing from a French-speaking country, Benin, and stresses that the organization welcomes participation from individuals regardless of their language or country of origin. This underscores the importance of inclusivity and promotes the message of accessibility and universality within the digital grassroots movement.

In conclusion, Rachad’s introduction of the impact report is marked by his optimism and excitement for the launch, showcasing his belief in the collective force of the organization and community. His gratitude towards the team and anticipation for future progress reflects his dedication and commitment to the cause. Furthermore, his emphasis on inclusivity and the organization’s open invitation to participants from all languages and regions highlights the significance of diversity and accessibility in digital grassroots programs.

Session transcript

Rachad Sanoussi:
Okay, good morning, everyone. Good morning, participants. I don’t know if they can hear me online. Okay, perfect. I think we will start our session, and welcome, everyone, to this session. My name is Rachad Sanoussi. I provide technical support at Digital Grassroots, and today we will start our session. It’s a great pleasure for me to welcome you all to this significant event to launch our impact report. Today we gather here not only as a community, but as a collective force driving change in the digital space. Since we started this journey in 2017, our organization has done a lot of things, so today we are happy to have you all here for this launch. I’m here with my colleagues, and I will let them introduce themselves. So over to you, Ufa. Can you hear me online? Yes, can you hear me?

Uffa Modey:
Hi. Yeah, we can hear you. Thank you. Hi, yes. Thank you, Rachad. Good day, everyone, and thanks for joining us here today. My name is Ufa. I am the co-founder and global lead at Digital Grassroots. I am a software engineer and technology policy analyst from Nigeria, currently residing in Newcastle, UK. I don’t know if I can put on my video as well. Okay. Yes, that works. So, yeah, thank you so much for joining us. Unfortunately, I can’t be present at the IGF in Japan, but we’re really, really happy to have you here with us today. As many of you know, Digital Grassroots is a youth-led nonprofit organization that is focused on increasing digital citizenship for young people from underrepresented regions with respect to internet governance and digital rights. We were founded in 2017 as one of the outcomes of the Internet Society Youth at IGF Fellowship. Since then, we have been doing a lot of work around digital literacy for young people, to enable them to access the services that they need to excel in the digital age, as well as engaging them in community engagement projects with regard to digital rights and internet governance, enabling them to understand the internet ecosystem in their local community in order to properly advocate against the various instances and challenges of digital rights and internet governance abuses in their own local communities. And because of that, at the end of every year, we like to congregate at the IGF to highlight the good work that has been done in our communities and to talk about how we navigate these digital rights issues in our communities as well. So today we’re here to talk about our impact report. We will be showing how we have engaged in the last five years and the work that we have been doing with regard to building our communities and running our programs. 
We have a flagship program called the Digital Grassroots Ambassadors program, which we run in coordination with our community leaders for advocacy programs, as well as our digital rights learning exchange programs. All of these programs are avenues and pathways that we are using to get more young people to be aware of how to advocate for the internet in their local communities, as well as how to connect and collaborate with other participants in the global internet ecosystem, where they can come together and do this amazing work. So that is why we’re here today. And I’m really looking forward to presenting this impact report to the global community and getting everyone’s input. Thank you very much for joining us today, and over back to you, Rachad.

Rachad Sanoussi:
OK, thank you so much, Ufa. And I think we can move forward with the session. I don’t know if Esther is already online. Yeah, let me check. OK, I will give the floor again to Ufa to present the impact report further before we launch it. Thank you, Ufa. Over to you.

Uffa Modey:
All right, thank you very much, Rachad. As many of you who have had a chance to pass by our booth at the IGF Village will know, you can scan a copy of our report and download it. I’m sure Rachad also has some copies of the report that can be passed around to be scanned. This report is called Unstoppable Together. It is a summary of the work that we have been doing in the past five years, and it highlights so many of our community members. Digital Grassroots is not just an organization, it’s a community. And why is this community-based learning important? It is important because, as young people from underrepresented regions, every single resource that goes into the digital rights advocacy work that we do is very, very crucial to us. So this report will enable us to tell our stories from an ownership perspective, to be able to put out the work that our amazing community has been doing in their various capacities and with the various resources that have been made available to them. This report was made in collaboration with the community. It was done in a bottom-up way, using stories, highlighting the work, showcasing the experiences, the lived challenges, and different instances of digital rights abuses that have been occurring in these various communities, talking about freedom of expression and privacy, surveillance, hate speech, inclusion, accessibility, and other issues that hinder open access to the Internet in so many local communities. Unstoppable Together is not just a one-off report that we want to put out. It’s an entire journey that shows a pathway to where we are going, towards the digital future that we are trying to build. And we want this to be something that we can build upon. So we want your feedback. We want your input. We want you to use this as a channel to get to know more about our work and how you can be a part of it. 
Unstoppable Together also highlights the key ways that you can be a part of our community and how people can contribute to it, which is very crucial to us. The work that we do cannot be sustained if it is not open and inclusive; that is something that is super important to us as well. So please engage with the report. We want to hear from you. We want your feedback, your contribution, and your collaboration every step of the way. And yes, we’re also going to use this as a platform to highlight some of our community members who are doing so much amazing work in their communities, and to recognize the work that they are doing. And please, again, before I tap out: make sure you engage with the report and with the work that we do that’s coming out of our communities. Thank you very much.

Rachad Sanoussi:
Thank you so much, Ufa. As she was saying earlier, we have some community members who are doing a great job in our community, so I would like to invite Hanna to present these community members. Thank you. Over to you, Hanna. Also, I have some hard copies of the impact report, so you can come and take some if you want. Thank you. So, Hanna, over to you.

Hanna Pishchyk:
Thank you, Rashad. I hope you all can hear me. My name is Hannah. I’m the communications lead at Digital Grassroots. I’m from Belarus, but I’m currently based in France. And I think we’re coming to the most exciting part of this session for us at DIGRA, where we get to celebrate and acknowledge the amazing impact that our community members have been making, because, as Ufa mentioned, we are an organization, but we’re also a community of people who are driving the knowledge and experiences that we try to transfer to the communities across our global network. The stories of the people that we are happy to recognize today are a testament to DIGRA’s spirit and values of fostering digital literacy, advocacy, and impactful leadership in Internet governance. And importantly, when we recognize people, do not hesitate to be very generous with the clapping emojis; I think it’s a very cool option that we have here. The first person I would like to acknowledge is Nancy Wachira. Since joining DIGRA in 2018, Nancy has magnified her impact in the digital space, leveraging her journey from a participant to a youth leader in global Internet governance initiatives. Nancy used her DIGRA experience to become a global digital inclusion advocate, working towards reducing digital inequalities in rural communities through her international engagements, representing Digital Grassroots at events like the Commission on the Status of Women, and through involvement with the IGF, ISOC, and other key initiatives. Nancy has been advancing DIGRA’s mission on the global stage, ensuring that efforts to bridge digital divides resonate across different communities and inspire active participation in the digital space. Nancy, I would like to give you the space. It’s OK now, you can speak.

Nancy Wachira:
Hello, everyone. Thank you for this opportunity. I’m so grateful to be part of this event, to be able to share my experience with you, and to have been part of this community since I joined in 2018. I was in the first cohort, when DIGRA just began, and I didn’t know much about the digital space or what to really expect as I began my journey. But out of curiosity, I just followed through and participated. I had done information technology back in my university, but I didn’t know where to begin to grow myself, to be able to speak up and to champion issues that can bring positive change to people in the community. So DIGRA was my first community. I’m really grateful for my mentors, Esther, Ufa, and Wadhangi. They really held my hand and showed me what to do in this space. And as I kept growing, I have been in the IGF space and I have contributed. Recently, this year, I represented the DIGRA community at International Women’s Day in New York. It was a great platform to share my story: how I began, where I am, and the impact I’m still creating. So I’m really grateful for this community, and together we can achieve much as we keep growing young people, lending a hand and showing the way. Thank you, everyone, and I hope we all participate and get to grow ourselves for the better.

Hanna Pishchyk:
Thank you so much, Nancy, and thank you to everyone who’s reacting with the emojis. The next person I would like to introduce and acknowledge is our community member from Haiti, Stanley Junior Bernard. Stanley has magnified his impact as a DIGRA community member, championing youth empowerment and Internet governance on a worldwide stage. Actively engaging with DIGRA across the years, Stanley has shown leadership in several of our projects, notably leading a DIGRA mini-hackathon, which has been a huge success for DIGRA in Haiti. Stanley’s leadership in his home country also drives the Young Girls Empowerment Initiative, where he tirelessly works towards mitigating gender-based violence and fostering digital literacy through various platforms, including the local chapter of the Internet Society and the Youth Observatory. Stanley has translated his insights into action, advancing our cause of building youth Internet leaders, both within his community and on a global stage. Stanley, please, the floor is yours.

Stanley Junior Bernard:
Hello, everyone, and thank you. Can you hear me? Yes, we can hear you. Yes, we can. OK, thank you. Thank you, Hanna. Hello, everyone, and thank you for this introduction. I am Stanley Junior Bernard. I am from Haiti, and I am part of the DIGRA community. It’s an excellent opportunity for me to be here at IGF 2023, even if I’m not present physically; I think being part of it online is an amazing thing. And today is the best day for me, because it’s my birthday and I have the opportunity to talk about Digital Grassroots and how that community has impacted my life. When I joined Digital Grassroots in 2019, I think it was the first time I encountered things related to Internet governance, and that drew me into Internet governance. I joined the Internet Society and took many courses online with them, which helped me build my knowledge and skills on Internet issues. And I could say now that Digital Grassroots was one of the best things that could happen to me, because it has played a significant role in shaping my understanding of digital rights and Internet governance, and it has provided me with the tools and knowledge that I needed to succeed in the digital world. Because nowadays the Internet and technologies are the new trend, and people in my country don’t really have access to technology, to the Internet, to connectivity. Even now, I still struggle to go online because of Internet connectivity. And I think the Internet should be open, free, and accessible to everyone, not only to countries in the global North, but also to the global South. People should benefit from the opportunities that are online. And I can say that, through Digital Grassroots, I was awarded a One Young World scholarship this year. 
And I think this is one of the ways Digital Grassroots has made an impact in my life, because through Digital Grassroots my work and ideas have been recognized, and I was granted a scholarship to One Young World, one of the major global events in the world. So I would say... I’m sorry. I would say that I believe Digital Grassroots has an important role to play in building digital capacity for marginalized youth around the world through its innovative programs. I would say that the Digital Rights Learning Exchange was one of the best programs I’ve ever attended on digital rights, digital activism, and digital advocacy, because we need this kind of training to reinforce the capacity of young people from the global South. So I would encourage everybody to support the work of Digital Grassroots, because the work that they are doing is impeccable. Thank you.

Hanna Pishchyk:
Thank you so much for sharing, Stanley. I’m not sure if this is the appropriate place to sing happy birthday collectively, but happy birthday to you; I hope you’re going to have a wonderful day. And, yeah, last but not least, we have Grace Zawuki from Zimbabwe. Embarking on her leadership journey with DIGRA, Grace has forged her path in the community from learner to mentor and advocate, exemplifying DIGRA’s values of community elevation and knowledge translation. Her efforts at the Zimbabwe Information and Technology Empowerment Trust have been instrumental in embedding digital rights and literacy within local frameworks. She has translated her capacity-building skills and DIGRA knowledge into actionable initiatives, not only uplifting her community as it acquires crucial digital literacy skills, but also playing a crucial role in the learning experience of DIGRA newcomers. She has been supporting our learners, fostering empowered digital advocacy and literacy across our DIGRA network. Grace, the floor is yours.

Grace Zawuki:
Thank you so much. Can you hear me? Yes, we can. Hi, everyone. Yes, my name is Grace, I’m from Zimbabwe, and I’m so pleased to be part of this event to launch the impact report. My journey with DIGRA started in 2022, when I participated in the Digital Rights Learning Exchange. And I can openly say that it was an eye-opener, and not only an eye-opener: it propelled me in my leadership journey in the Internet and digital rights landscape. Because soon after participating in the Digital Rights Learning Exchange, I got spotlighted for an opportunity I could never otherwise have had, to be part of the prestigious Community Solutions Program. So currently I’m in the United States, still carrying on the same work, looking at digital literacy and also increasing digital safety and digital rights awareness amongst our communities. So, well, yeah, we really are unstoppable together. And through DIGRA, I learned that instead of looking for what’s wrong in any situation, we should look at what we are strong at, and if we maximize on that, we can continue to have impact in our communities. So, yeah, I’m happy to be part of this community, and I would like to continue to be part of the DIGRA community. The work that we are doing is similar work, and working together can make us have more impact in all our communities. And we are also putting Africa in the spotlight. So thank you so much, DIGRA, and I’m so happy that you invited me to be part of this event. Thank you.

Hanna Pishchyk:
Thank you very much, everyone. And just before we go, I would like to say a big thank you to everyone. I don’t know if any other member of the team would like to say a few words before we pass back to Rashad.

Rachad Sanoussi:
I think Estelle would like to say something. I don’t know.

Estelle:
I just wanted to say big congratulations to all of you. This impact report would not have been possible without your hard work and dedication. When we started the DIGRA programs, just through volunteering and collaborating, it was really in the hope that we could create new young leaders from our side of the world, so that we are more represented in the Internet governance ecosystem. I’m just so proud to see what you’re all doing and the success you’ve achieved. Huge congratulations; we are very proud of you, and thank you for being part of our community. Thank you, Rashad.

Rachad Sanoussi:
Okay. Thank you so much, everyone. As we come to the end of this session, I would like to express my gratitude to each and every one of you for your active engagement and your contributions. In the coming months and years, we hope to build upon this foundation and go forward. We are truly unstoppable together, and I look forward to our continued journey. Thank you. And until we meet again, keep the digital grassroots movement alive. Thank you. Thank you, Rashad. Thank you, everyone. I’m sorry, are there any questions? Yes, yes. You can ask questions. You can use this mic to ask your question. Yeah.

Audience:
Good morning, everyone. My name is James. I’m from Cameroon. I want to thank you for your well-articulated presentation and for your report. But going through the presentations from all the guests speaking, I noticed one similarity: they were predominantly from, say, English-speaking parts of Africa and other African countries, and the IGF is really struggling to promote a multilingual environment. I come from Cameroon, for example, and today is the very first time I have heard about this lofty project. So, firstly, what are the conditions to become an ambassador? And secondly, what is being done to reach other countries that do not express themselves in English? Thank you very much.

Rachad Sanoussi:
Okay. Thank you so much for your question. I will give you a short answer, and my colleague will also help me. As I was saying, I am Rachad Sanoussi and I am from Benin, and Benin is also a French-speaking country, like Cameroon. My journey with Digital Grassroots started in 2019, when I attended the IGF in Berlin, like you are attending now, and that’s where I met Digital Grassroots and heard about them. I decided to join one of their programs, the community leadership training, and over the years I learned a lot. After that, I joined the team. So even though I am not from an English-speaking country, I was able to learn through this journey. Our programs are open to everyone, even if you are not from an English-speaking country; I know a lot of ambassadors from Benin as well who have joined our program. But I will let my colleague give more answers. Ufa, would you like to comment?

Uffa Modey:
Yes, thank you very much, Rashad. And as you’ve already said, at Digital Grassroots we do a lot of work in Francophone countries in Africa, and we receive a number of applications from them when we are running our ambassadors program. Admittedly, the language barrier in Internet governance is an issue, so we have historically run a specific cohort of the training for Francophone countries, where the entire training program on Internet governance and digital rights is delivered in French. You can engage with us: talk to Rashad after this session, visit our website, learn more about our work through our reports, stay in touch with us, and join our mailing list to see how you can be part of our community. We are 100% open and always looking for new ways to innovate around engaging more people in Internet governance.

Rachad Sanoussi:
Thank you, Ufa. We can engage further afterwards. Thank you. I don’t know if there are any questions online as well. No? So thank you all for joining us; it was really great to have you all. Have a good day. Bye-bye.

Speakers’ statistics

Audience: speech speed 141 words per minute; speech length 141 words; speech time 60 secs
Estelle: speech speed 132 words per minute; speech length 117 words; speech time 53 secs
Grace Zawuki: speech speed 139 words per minute; speech length 313 words; speech time 135 secs
Hanna Pishchyk: speech speed 143 words per minute; speech length 757 words; speech time 317 secs
Nancy Wachira: speech speed 171 words per minute; speech length 325 words; speech time 114 secs
Rachad Sanoussi: speech speed 128 words per minute; speech length 737 words; speech time 347 secs
Stanley Junior Bernard: speech speed 152 words per minute; speech length 519 words; speech time 205 secs
Uffa Modey: speech speed 162 words per minute; speech length 1230 words; speech time 455 secs

Robot symbiosis café | IGF 2023 WS #95



Full session report

Hiroaki Kotaka

Hiroaki Kotaka is a well-known advocate for the use of technology, particularly robotics, across industries. He is focused on developing the manufacturing and processing industry in Kyoto, and firmly believes that embracing technological advancements such as robots can lead to significant growth and innovation in the industry.

One area where Kotaka sees the potential of robotics is in the service industry, including assisting both able-bodied individuals and those with disabilities. To explore this idea further, he initiated the Robot Symbiotic Cafe Initiative. The initiative tests remote customer service by deploying robots in actual cafes, piloted remotely by individuals with disabilities, demonstrating how robots can improve service delivery and inclusivity.

Kotaka has also been striving to provide work opportunities for individuals with disabilities through the use of robotics. He initiated discussions with Mr. Inoue, which led to the start of the Robot Symbiotic Cafe project. This project brings together researchers and executive managers to discuss the use of robots in customer service and communication at cafes, with the goal of creating meaningful employment for individuals with disabilities.

To ensure that robot technology is accessible to all, Kotaka collaborated with the partner company Kegan to customize existing service food robots. This partnership aims to find suitable solutions that cater to the diverse needs of individuals with disabilities. Through ongoing discussions and efforts, Kotaka and Kegan are working towards creating truly inclusive and accessible technology.

Collaboration plays a vital role in developing warm and personable robots that allow individuals to express their uniqueness. Kotaka advocates for partnerships with various stakeholders, including the Department of Welfare, city administration, and legislative corporations. By broadening these partnerships, Kotaka hopes to foster a collaborative environment that encourages the development of innovative and inclusive robot technologies.

Kotaka also emphasizes the importance of publicizing these initiatives among businesses. He believes that highlighting the benefits and opportunities associated with robot integration will encourage more companies to embrace these technologies. The Robot Symbiotic Cafe Initiative serves as an excellent example of how robots can enhance job satisfaction and meaning in the lives of their pilots, the remote operators with disabilities, further supporting Kotaka’s argument.

In conclusion, Hiroaki Kotaka is a strong advocate for the use of robotics in various industries. He believes that leveraging technological advancements, particularly in the manufacturing and service sectors, can lead to significant growth and inclusivity in Kyoto. Through initiatives like the Robot Symbiotic Cafe and collaborations with stakeholders, Kotaka aims to create accessible and meaningful work opportunities for individuals with disabilities. Overall, he remains committed to supporting the development and integration of robots in different industries.

Audience

During a panel discussion, a representative from Benin raised the question of the cost of developing a robot and sought advice on reducing these costs. In response, the representative from Kagan Inc acknowledged the complexity of quantifying the cost of robot development, explaining that the process typically takes three to five years.

The representative from Kagan Inc suggested that one way to reduce costs is by using simpler and less complex mechanisms in the development of robots. By simplifying the design, it becomes easier to manufacture, ultimately reducing the overall cost. The representative highlighted the importance of the start-up element in bringing down manufacturing costs. Start-ups often have innovative and efficient methods that help streamline production processes and decrease expenses.

Additionally, the representative emphasized that reducing complexity is crucial to achieving cost reduction. Complex mechanisms not only increase costs due to the need for intricate manufacturing processes but also require more time and resources during the development phase. By keeping the mechanisms simple, the manufacturing process becomes more straightforward and less costly.

The panel discussion provided valuable insights into the cost aspects of robot development. It highlighted the challenges in quantifying these costs due to the lengthy development process. Furthermore, it emphasized the significance of simplifying mechanisms and leveraging start-up elements to decrease manufacturing costs.

In conclusion, the session shed light on the high level of effort and time investment required to develop a robot. It underlined the importance of considering cost reduction strategies, such as using simpler mechanisms and taking advantage of the innovative methods employed by start-ups. These insights can guide future efforts in robot development, promoting more affordable and accessible technology in this field.

Manabu Inoue

Manabu Inoue is a strong advocate for promoting opportunities and inclusivity for individuals with disabilities. He believes that robots can play a crucial role in improving their lives, both in terms of communication and work opportunities.

One of Inoue’s key beliefs is that individuals with severe disabilities should be able to operate robots for communication. He observed that individuals with communication and cognitive impairments faced difficulty when using a robot-assisted customer service at a cafe. This led him to reach out to local companies to discuss the possibility of developing a robot specifically tailored to suit the needs of individuals with severe disabilities.

However, there are skeptics who doubt the feasibility of developing such robots. Inoue himself expressed doubt in the feasibility, as he found no evidence of companies already developing robots that met the specific requests. Despite this skepticism, Inoue remains committed to customizing robots to be simple and easy to operate, thus making them suitable for individuals with limited hand dexterity.

Inoue also recognizes the importance of collaboration with disability support organizations and schools. He aims to expand on supported services by partnering with these organizations and sparking a change in awareness of what can be achieved with robotics. By collaborating with these entities, Inoue hopes to create more opportunities for individuals with disabilities and provide them with a sense of pride and confidence in their work.

The sentiment surrounding Inoue’s vision and efforts is overwhelmingly positive. Pilots who have had the opportunity to operate the robots have expressed great joy and a desire to actively participate in society. Inoue’s goal is to empower individuals with disabilities, especially those with severe disabilities, by helping them obtain employment and gain a sense of achievement.

In conclusion, Manabu Inoue believes in the potential of robots to transform the lives of individuals with disabilities. Through customization and collaboration with disability support organizations and schools, he aims to create more opportunities and inclusivity. The positive sentiment from individuals who have experienced the benefits of robotic assistance further emphasizes the importance of these efforts. Ultimately, Inoue’s goal is to enable individuals with disabilities to gain confidence, pride, and employment opportunities through the use of robotics.

Leila Liza Dailly

Kagan Inc. is a startup company that was founded in Kyoto Prefecture in 2016. The company’s team consists of members not only from Japan but also from the US, Europe, and Asia, bringing together expertise from major electronics manufacturers. Kagan Inc. focuses on the development, manufacturing, and sales of robotics, with a particular emphasis on customizability to meet user demands.

A key product offered by Kagan Inc. is the Kagan Motors, which simplifies the process of creating robots. The motors have received positive feedback for their ability to streamline robot construction. Additionally, the company has launched the Kagan ALI Autonomous Robot, which is widely used in various settings such as factories, warehouses, and restaurants. The versatility of Kagan Inc.’s robots allows them to be tailored to specific needs.

The company recognizes the wide applicability of robotics in different sectors. Their robots have been successfully implemented in factories, warehouses, and restaurants, showcasing their flexibility. Kagan Inc. highlights the importance of user-centered design and interfaces, implementing foot pedals as the main interface for individuals with limited hand dexterity. Feedback from users is actively collected and used to improve the user interface, and pilots are extensively trained to maneuver the robots.

In addition to their focus on robotics, Kagan Inc. specializes in customizing robots to suit customers’ needs. By minimizing basic functions, the company ensures that their robots are perfectly tailored to each customer’s requirements. Furthermore, Kagan Inc. aims to utilize existing business estates to address individual needs and support job procurement, contributing to economic growth.

Overall, Kagan Inc. is a pioneering startup that prioritizes the development and customization of robotics. Their Kagan Motors and versatile Kagan ALI Autonomous Robot showcase their innovative and highly customizable products. With a strong emphasis on user needs and the utilization of existing resources, Kagan Inc. strives to contribute to both individual and societal growth.

Moderator

Hiroaki Kotaka, a prominent figure in the field of robotic technology, approached Kegan, a company specialising in service food robots, to customise their robots for implementation in the Robot Symbiotic Cafe. This partnership aimed to enhance the functionality and efficiency of the robots specifically for use in this unique cafe setting. The collaboration between Kotaka and Kegan was met with a positive sentiment, as the moderator of an event invited Kotaka to demonstrate the usage of these robots in the Robot Symbiotic Cafe.

During the demonstration, Leila Liza Dailly showcased the capabilities of a robot operated by an employee at the company. This provided a hands-on experience for the audience, highlighting the practicality and usefulness of these robots in real-world scenarios. The demonstration generated a neutral sentiment, with the moderator expressing interest in continuing the demonstration.

One notable aspect of the robots’ operation is the use of foot pedals instead of a keyboard for control. This decision was made to simplify the piloting process and make it more intuitive for the operators. This innovative approach not only reduces costs but also improves user experience and accessibility. Furthermore, the company actively seeks input from individuals with disabilities to ensure that the operation of the robots is accommodating and convenient for everyone.

While training pilots to manoeuvre the robots was appreciated, it was observed that this process led to exhaustion among the pilots. This highlights the importance of striking a balance between providing adequate training and preventing fatigue to optimise the performance and well-being of the operators.

A key strength of Kegan lies in their expertise and ability to customise robots to suit individual needs. This bespoke approach ensures that the robots can effectively cater to the specific requirements of different environments and users. Additionally, to reduce development costs, the company leveraged existing food serving robots, demonstrating a cost-effective and efficient approach to innovation.

During the event, a speaker from a robot manufacturing and development company shared their expertise, citing a development timeframe of three to five years for creating a robot. This insight offers a realistic perspective on the time and effort required for the successful development and implementation of robust robotic systems.

Furthermore, the speaker emphasised the importance of simplicity in technology, particularly in reducing costs. Keeping technology straightforward and streamlined not only facilitates cost reduction but also enhances usability and maintenance.

In conclusion, the partnership between Hiroaki Kotaka and Kegan aims to enhance the functionality of service food robots for implementation in the Robot Symbiotic Cafe. The use of foot pedals for control, customisation of robots to suit individual needs, and consideration for disabled users demonstrate the company’s commitment to innovation and accessibility. Further insights from experts highlight the dedication required for successful robotic development and the benefits of simplicity in technology.

Session transcript

Hiroaki Kotaka:
to be able to work out of a consultation with the Kyoto-based robotics companies and the prefecture of Kyoto by operating a robot through the Internet. Thank you, Mr. Inoue. Next, I request Mr. Kotaka from the Kyoto Prefecture Department of Commerce, Labor and Tourism, Manufacturing Promotion Division. My name is Kotaka. I am from the Manufacturing Promotion Division of the Department of Commerce, Labor and Tourism in Kyoto Prefecture. First, I would like to briefly introduce the robot initiatives in Kyoto Prefecture. Our division supports SMEs in the manufacturing and processing industry in the prefecture, as well as content companies, such as games and video companies, and startups. It promotes robots, one of the cutting-edge technologies. Japan used to be known as one of the world’s leading robot-producing countries; however, foreign competitors have emerged in recent years, and Japan no longer holds the number one position. To recover, we set up the Keihana Robotics Engineering Center in 2019 to reclaim that position; it supports the development of next-generation technology and the entry of small and medium-sized enterprises and startups in the prefecture into the robotics industry. Over 720 research and development projects and demonstration tests have been conducted at the Robotics Engineering Center, and a number of companies have reached the social implementation stage and are conducting field demonstrations in various locations within the prefecture. Under the Robot Symbiotic Cafe Initiative, we are conducting demonstrations of remote customer service, with individuals with disabilities serving customers in actual cafes, thereby aiming to create a place where humans and robots coexist and work together in harmony. So finally, I would like to introduce Ms. Rayla Daly from Kagan Inc. Thank you. My name is Rayla Daly. I am from Kagan Inc.

Leila Liza Dailly:
Nice to meet you all. Our company is a startup founded in Kyoto Prefecture in 2016. Our mission is a quick and easy robot for everyone, and we have members not only from Japan, but also from the United States, Europe, and Asia. Most of our personnel come from major electronics manufacturers, and we conduct development, manufacturing, and sales. At the start of our entrepreneurial journey, we developed Kagan Motors, which make it astonishingly easy to create robots. We received favorable feedback from customers across universities and R&D fields, so we began offering motorized robots, such as conveyors, rollers, and AGVs, to respond to requests for use on factory production lines. In 2022, we launched the Kagan ALI autonomous robot, widely used in factories, warehouses, restaurants, etc. Customization is its key feature, which gives us the flexibility to meet user demands, such as transporting items and fulfilling communication roles. Well, thank you, Rayla. So, to all of you working on the Robot Symbiotic Cafe, tell me how this got started.

Hiroaki Kotaka:
Let’s start with Mr. Kotaka. Mr. Inoue consulted with me over the phone about the possibility of individuals with disabilities working remotely from home by operating robots through the Internet. Around the same time, we held a panel discussion called Keihanna Residence to expand the network of researchers and business professionals in Keihanna Science City. There, researchers working on robotics and executive managers of rehabilitation-related facilities discussed with great excitement the possibility of using robots to assist with customer service and communication at cafes. With all these factors coming together, I felt I had to act on the conversation with Mr. Inoue, and so I started the Robot Symbiotic Cafe project. So, Mr. Inoue, what exactly did you consult Mr. Kotaka about?

Manabu Inoue:
So last year, I visited a cafe where robot-assisted customer service through remote operation was already being implemented. When I observed it, individuals with communication and cognitive impairments had difficulty using those robots, and I could not imagine an individual with a severe disability operating them. Therefore, I reached out to the local companies we collaborate with on a regular basis and discussed the development of the robot I had in mind at the time. I was then told about the Robotics Engineering Center and promptly called them to discuss the prospect. At the time of the inquiry, I doubted the feasibility, as no company had yet developed robots matching the requests.

Hiroaki Kotaka:
So I decided to approach Kagan, a partner company of the Robot Technology Center, because we had been holding a seminar together on robots becoming part of everyday life, and they agreed to my request, customizing their existing food-serving robots so that we could find a suitable solution for the equipment.

Moderator:
So I was able to match the two. We brought the actual robots used for the Robot Symbiotic Cafe. Would you like to demonstrate?

Leila Liza Dailly:
So let me show you the actual robots. The person who built the robot is at home right now, and today it is being operated by personnel from our company. So you call these operators pilots, right? Today, one of your employees is going to be the pilot. We can’t really handle food and drinks at this venue, so the robot will be carrying pamphlets instead; let’s see how that works. Would you like to come up to the table? Because, you know, nobody uses chairs. Thank you for the demonstration of the robot.

Moderator:
Please continue to operate. So, you carried out the demonstration in February.

Hiroaki Kotaka:
So let me know how it went.

Manabu Inoue:
This demonstration involved individuals with severe mental and physical disabilities who need constant nursing care, those who need daily medical treatment for serious illnesses, and those in so-called social withdrawal states who have difficulty stepping out of their homes but still wish to work. Allowing them to work from home by operating a robot: that is what we wanted to achieve with the Symbiotic Cafe. In the realm of support for those challenged with disabilities, this is a completely new approach. We want more and more people in organizations supporting individuals with disabilities to become aware that individuals with disabilities can work by remotely operating robots, as this may open up new possibilities for employment. So how did you choose the people who would take part in the demonstration test? For the pilot project, we consulted with an organization in the local community supporting individuals facing social withdrawal. The actual pilot is an individual who likes computers and the Internet and expressed a strong desire to participate. Tell us about the development of these robots. I personally don’t know much about robots, but I requested that it be possible to operate the robot remotely from home. Since some of the pilots have limited hand dexterity, I also requested that operation be made as simple as possible.

Leila Liza Dailly:
And so we decided to use foot pedals instead of a keyboard, making it easier for the pilot to operate. Today we brought the foot pedal that we used at that time. This pedal that I’m showing you right now is how they operate the robot. Our employee went to the home of the person who would be the pilot and asked directly what would be easiest for them to operate. I assume that people with disabilities have distinctly different kinds of disabilities. Could you give us an example of the difficulties in development? Yes. The pilots were very happy when they received training in maneuvering the robots, but they got exhausted operating them, so we tried to improve the user interface. The challenge for the future is how meticulously we can address their needs. I have previously heard that startups can handle flexible requirements that large companies cannot afford to. Could you please elaborate on this? Our company specializes in providing customers with the most suitable robots by minimizing basic functions and customizing the robots; that is what we are good at. I believe the disabilities of the pilots can be diverse, and the development cost would be high if you had to develop from scratch. What did you do to reduce the development cost? We used an existing food-serving robot, the kind used to serve food in restaurants.

Moderator:
So we appropriated those existing robots, adapting a pedal that is readily available on the market. I know that you will be continuing this initiative in the future. How are you going to improve and continue with this project?

Hiroaki Kotaka:
The most important aspect is clearly defining what kind of robot is to be built, together with the individuals with disabilities who will be the pilots. Also, as pilots become accustomed to maneuvering the robot, they can take on more tasks, and we can address their needs rapidly. So we want to evolve the robot by talking with individuals with disabilities and defining the requirements for the kind of system that is best for them. What are the key points of the demonstration? Customization and individuality are the key points. Regarding customization, making improvements to a finished product is time-consuming and can be expensive, so time and cost can be reduced by combining existing technology. Regarding individuality, full automation using a robot would require time and cost for development. However, human operation and robot operation can be skillfully combined, with robots complementing what humans cannot do and humans in turn complementing actions that robots struggle with. Through this kind of initiative, we want to collaborate with everyone in creating warm and personable robots that allow individuals to express their uniqueness.

Manabu Inoue:
So that means humans and robots should coexist, each demonstrating their individuality, instead of relying solely on robots for everything. What are your thoughts on this, Mr. Inoue? The pilots expressed great joy about being able to operate the robot and want to try more customer interactions. Those who support individuals in a state of social withdrawal were pleasantly surprised that these individuals expressed a desire to participate actively, which brought them immense happiness. I hope that through future demonstration tests, individuals with disabilities will not only operate robots but will also interact with people through robots, participating in society, working, and earning a salary by themselves, and that society will come to accept this as the norm.

Leila Liza Dailly:
Could you share your thoughts on this, Ms. Dailly? Our company has until now focused on manufacturing robots, and engineers often tend to concentrate solely on building robots with cutting-edge technologies. However, the Robot Symbiotic Cafe Initiative gave us an opportunity to think about how to design a robot that can help pilots enhance their sense of purpose in life and find meaningful job satisfaction. So finally, what are the prospects for the future of this project?

Hiroaki Kotaka:
Mr. Kotaka? As the responsible department in Kyoto Prefecture, we wish to continue supporting the development of robots. I think it is important that we develop more partners, not just the Department of Welfare, but also the city and other administrative organizations, and through these collaborations I hope to be more connected and collaborative with the world. I would also like more businesses in our prefecture to get involved, part of which has to do with publicizing these initiatives among businesses.

Leila Liza Dailly:
Ms. Dailly? Our company has its existing businesses, and in the future we want to address the needs of individuals through customization and help them obtain jobs: not just improving product efficiency, but helping every individual contribute to society.

Hiroaki Kotaka:
Final remarks, Mr. Inoue?

Manabu Inoue:
Personally, what I hope to do is help people with severe disabilities obtain some kind of employment. For those with severe disabilities, we are customizing these robots and examining the feasibility of adapting them to different disabilities. We also want to continue developing talent who can be pilots, collaborating with schools that support people with disabilities. Through using robots, I hope people with disabilities can gain confidence and pride in their work and their lives, live better, and cooperate with various stakeholders who support them. And to our prospective partners: as Kyoto Prefecture has mentioned, I would like to talk with other organizations that support people with disabilities so that we can expand this support in the future. I think it is important for them to witness that this is something that can be achieved, as it will change their awareness.

Moderator:
Thank you to all of the panelists. We would now like to move on to the question and answer session. Are there any questions in the chat? It seems there is no one on the chat, but if you have any questions on the floor, please go ahead.

Audience:
Yeah. I am from Benin. I would like to ask about development: how much did it cost to develop the robot? I am doing research on robot development, but I am aware that it can be expensive, so I would appreciate any advice on reducing development costs. So this is a technical question; could you answer for us? I am from Kagan Inc., which manufactures and develops robots. Regarding the cost, the development took about three to five years, so I cannot give you a single figure, but we made efforts to reduce the cost. I was checking with the interpreter whether they needed time for consecutive translation, but it seems to be OK. For startups, using something that is as simple as possible is the way to reduce cost; that is a way to enable social implementation, by reducing complexity and keeping it simple. I hope that answers your question. Are there any other questions on the floor or in the chat?

Moderator:
So I would like to wrap up the session. Thank you so much for joining the session. You’re welcome.

Hiroaki Kotaka

Speech speed: 138 words per minute
Speech length: 851 words
Speech time: 369 secs

Audience

Speech speed: 131 words per minute
Speech length: 251 words
Speech time: 115 secs

Leila Liza Dailly

Speech speed: 130 words per minute
Speech length: 754 words
Speech time: 347 secs

Manabu Inoue

Speech speed: 132 words per minute
Speech length: 753 words
Speech time: 342 secs

Moderator

Speech speed: 128 words per minute
Speech length: 167 words
Speech time: 79 secs

Radical Imaginings-Fellowships for NextGen digital activists | IGF 2023 Networking Session #80


Full session report

Alice Lanna

Alice Lanna, a mentor for the Brazilian youth group in IGF (Internet Governance Forum), emphasises the significance of youth participation in relevant discussions. She firmly believes that young people should not only be the subjects of discussions but must also actively contribute to them. Lanna is passionate about engaging young individuals and ensuring that they have a voice and agency in shaping decisions that affect them.

Lanna strongly supports youth involvement and active participation in discussions. She actively engages in activities that foster youth participation, showing her dedication to empowering young people and amplifying their voices. For instance, she mentors the Brazilian youth group within IGF, providing guidance and support to ensure that their perspectives are heard and valued.

Furthermore, Alice Lanna advocates that the opinions of those who receive funding be given proper consideration in funding processes. She argues that the views and input of the funded person should not be disregarded but acknowledged and integrated into the decision-making process. Lanna believes that the funded person should play a meaningful role in the design and implementation of the process, rather than being seen as a mere tool to execute predetermined plans. By involving the funded person in decision-making, she believes that better results can be achieved, potentially surpassing the original expectations of the funder.

Additionally, Lanna stresses the importance of striking a balance between mentoring and trust in the funding process. While she recognises the value of having mentors or guidance in the funding process, she also emphasises the need for flexibility and understanding in terms of trust. Lanna believes that the person or organisation being funded carries valuable experiences and knowledge that can contribute to the process. It is not just a one-way learning process, but also an opportunity for the funded person to contribute by sharing their insights and expertise.

In conclusion, Alice Lanna’s main arguments highlight the importance of youth participation, the need for their active involvement in discussions, and the significance of valuing the opinions and involvement of the funded person in funding processes. She supports a balanced approach that combines mentoring and trust to ensure a more inclusive and effective decision-making process. Lanna’s advocacy for youth empowerment and her insights into funding processes contribute to fostering a more participatory and equitable society.

Audience

During the conversation, the speaker showed great enthusiasm in hosting webinars on a specific topic. They agreed wholeheartedly to participate in this endeavor and assured the listener that they would comply with any instructions given. However, the speaker also expressed some confusion about certain aspects of the topic.

In addition to their eagerness to host webinars, the speaker also emphasized their interest in maintaining close ties with the individuals mentioned. This highlights their desire for continued interaction and collaboration, suggesting a high level of importance and interest in maintaining these connections.

Throughout the conversation, the speaker’s tone seemed somewhat resigned, as they admitted to being unsure about the situation and appeared to be primarily following orders rather than relying on personal knowledge or expertise.

In conclusion, the key points discussed in the conversation were the speaker’s willingness to host webinars, their confusion about certain aspects, and their interest in maintaining connections with relevant individuals.

Anita Gurumurthy

Anita Gurumurthy, an advocate for digital rights, emphasizes the importance of accessibility in digital rights debates, particularly for under-resourced regions and organizations. She highlights the need for collaboration among different spaces to co-design fellowships that cater to the needs of future activists and scholars. Anita also encourages participants to fill in a short survey, with the added incentive that those who provide their email IDs will receive the survey’s analysis.

Anita argues that understanding digital rights is crucial for social movements, as digitalization continues to reshape various issues. She points out the new challenges faced by social movements, such as health data, ed-tech technologies, trade agreements, and algorithmic non-transparency. By promoting a deeper understanding of digital rights, Anita aims to empower social movements to effectively address these challenges.

However, Anita is critical of existing fellowships, expressing concerns that they often prioritize individual experiences over contributing to institutional strengthening within social movements.

Instead, she supports inclusive digital rights fellowships that bridge gaps and promote collaboration among various stakeholders. She cites successful fellowship programs conducted by IT for Change, which not only provided valuable opportunities to fellows but also led to reshaping development programs.

Overall, Anita Gurumurthy advocates for greater accessibility in digital rights debates, collaboration among diverse spaces, and the establishment of inclusive digital rights fellowships that contribute to the development and strengthening of social movements.

Hélène Molinier

Hélène plays a key role in managing the Action Coalition on Tech Innovation for Gender Equality at UN Women. This coalition aims to diversify the digital cooperation stage, bringing new voices and perspectives to the forefront. It focuses on using tech innovation to foster greater gender equality.

The coalition’s main objective is to advance SDG5: Gender Equality, striving for equal rights and opportunities for all genders. It also aligns with SDG10: Reduced Inequalities, which tackles various forms of inequality, including those based on gender.

The argument put forth by the coalition emphasizes the critical role of technology in promoting gender equality. Through tech innovation, it is possible to create new opportunities and address the existing gender disparities prevalent in many sectors.

Hélène’s leadership in managing this coalition underscores the commitment to using technology as a catalyst for gender equality. Her involvement indicates a positive sentiment towards empowering women and promoting gender equality through technology.

The coalition’s argument is supported by evidence such as research and case studies showcasing the potential impact of tech innovation in addressing gender disparities. It highlights successful initiatives that have bridged the gender gap in sectors like education, employment, and access to resources.

In conclusion, Hélène’s management of the Action Coalition on Tech Innovation for Gender Equality reflects a global interest in diversifying the digital cooperation stage and promoting gender equality through tech innovation. The initiative aligns with SDG5: Gender Equality and SDG10: Reduced Inequalities, demonstrating a commitment to addressing existing gender disparities. The overall sentiment towards using technology for gender equality is positive, recognizing its potential to create new opportunities and empower women worldwide.

Christian Leon

Christian Leon, hailing from Bolivia, currently holds the esteemed position of Executive Director at the Internet Bolivia Foundation. In addition, he also serves as the Secretary of Al Sur, a coalition comprising 11 civil society organizations that collectively strive to promote digital rights throughout Latin America. Christian is widely recognized and respected for his unwavering dedication to advocating for internet freedom and safeguarding digital rights.

As the Executive Director of the Internet Bolivia Foundation, Christian plays a vital role in spearheading initiatives aimed at bridging the digital divide and ensuring equal access to information and technology in Bolivia. The foundation undertakes projects and campaigns to empower individuals and communities, equipping them with the necessary tools and knowledge needed to actively partake in the digital era.

Furthermore, Christian’s position as the Secretary of Al Sur demonstrates his unwavering commitment to promoting digital rights at a broader scale. Through collaboration with various civil society organizations across Latin America, he fosters unity in advocating for policy and regulatory measures that protect and enhance digital rights for all citizens.

Christian’s portrayal as a learner further accentuates his dedication to continuous personal growth and knowledge exchange. He displays a genuine willingness to learn from others while also offering his own expertise and insights to the wider community. This openness, combined with his extensive experience in the field of digital rights, positions him as a valuable resource for discussions and initiatives pertaining to internet and digital rights across Latin America.

In conclusion, Christian Leon’s roles as Executive Director of the Internet Bolivia Foundation and Secretary of Al Sur highlight his wealth of knowledge and experience in advancing digital rights in Latin America. His commitment to internet freedom, bridging the digital divide, and advocating for policies that protect digital rights exemplifies his devotion to creating a more inclusive and equitable digital landscape.

Barbara Leodora

Barbara Leodora, a representative from Article 19 based in Brazil, is spearheading a campaign that aims to provide fellowships for popular communicators. This initiative was developed during the pandemic, demonstrating the organization’s adaptability and commitment to addressing emerging challenges. The campaign has successfully conducted two editions, one in 2020 and another in 2021, highlighting its longevity and impact.

The primary objective of the fellowship program is to empower popular communicators who play a vital role in keeping the public well-informed. Barbara Leodora emphasises the significance of using popular communication as a means to provide knowledge and information to a broader audience. This approach is particularly crucial during times of crisis, with a specific focus on disseminating accurate and timely information about the pandemic.

Furthermore, Barbara Leodora’s dedication extends beyond communication. She is deeply committed to promoting and safeguarding democratic processes. This commitment was evident in the program’s previous edition, which specifically targeted elections. Barbara stressed the importance of ensuring that the elections proceeded smoothly, further underscoring the campaign’s overall goal of fostering democratic values and practices.

Regarding funding and resource allocation, the campaign demonstrates flexibility and trust-building. In response to the challenges posed by the pandemic in 2020, Article 19 Brazil made adjustments that allowed beneficiaries to have more autonomy in determining how they use the funds. This inclusive and flexible approach not only enhanced trust between Article 19 Brazil and the fellows but also showcased a genuine respect for the fellows’ identities and autonomy.

Capacity building and community network building lie at the heart of the campaign’s fellowship program. By offering comprehensive courses and workshops, the program equips fellows with the necessary skills and knowledge to effectively engage with their audiences. The establishment of community networks through platforms like WhatsApp groups further encourages collaboration and the sharing of valuable insights among fellows. Importantly, even beyond the fellowship program, the campaign ensures ongoing engagement with the fellows, enabling sustained support and growth in their work.

Lastly, continuous improvement is prioritized within the fellowship program. Feedback from fellows is highly valued as it contributes to enhancing future programs and initiatives. Additionally, mutual learning is actively encouraged, recognizing the value of sharing knowledge and experiences among participants. This commitment to continuous learning fosters an environment of growth and helps the campaign remain responsive to the evolving dynamics of popular communication.

In conclusion, Barbara Leodora’s leadership in Article 19 Brazil’s campaign for fellowships for popular communicators exemplifies a comprehensive and multi-faceted approach to communication, democratic engagement, and resource allocation. By empowering popular communicators, providing knowledge and information, and promoting democratic processes, the campaign contributes to reducing inequalities and promoting quality education. The focus on flexibility in resource allocation, capacity building, community network building, and continuous improvement all contribute to the campaign’s overall effectiveness and long-term impact.

Arielle McGee

In this analysis, three speakers from Internews are examined, shedding light on their areas of focus and involvement. The first speaker, Arielle McGee, is identified as a program officer responsible for Internews’ Asia region. Her primary work revolves around collaborating with women and youth-led civil society organizations. Although no specific projects or initiatives are mentioned in the analysis concerning Arielle, her involvement with these organizations indicates a focus on women empowerment and youth-led initiatives, aligning with Sustainable Development Goal (SDG) 5 for gender equality and SDG 8 for decent work and economic growth.

The second speaker, whose name is not mentioned, is associated with an upcoming project at Internews. This project pertains to human rights and internet governance, which implies a commitment to promoting and protecting human rights principles in the context of the digital realm. By engaging in this project, Internews aims to contribute to reducing inequalities, as indicated by its relevance to SDG 10.

The third speaker’s name is also missing, but the analysis reveals the speaker’s interest in learning from others to enhance Internews’ fellowship program. Internews currently runs a substantial fellowship program, which suggests a dedication to providing learning opportunities and quality education, in line with SDG 4. The speaker’s inclination to gain insights and implement best practices from other institutions indicates a proactive approach to continuously improving the program’s effectiveness.

Overall, the analysis highlights Internews’ multifaceted approach to their work, encompassing various thematic areas such as women empowerment, youth-led initiatives, human rights, internet governance, and education. The inclusion of the SDG framework signals their commitment to contribute towards the achievement of global sustainable development goals.

While the analysis provides valuable insights into the speakers’ roles and interests within Internews, it does not include specific evidence or the speakers’ views on the subjects discussed. It would be beneficial to obtain additional information regarding the speakers’ experiences, projects, and achievements to gain a more comprehensive understanding of their contributions to their respective areas of focus.

Oscar Jiménez

Oscar Jiménez and Mio, the initiative he leads, have emerged as prominent forces in promoting important causes. Oscar Jiménez works tirelessly at a research centre at the University of Costa Rica, dedicating his efforts to advancing freedom of expression and digital rights. Hailing from Costa Rica, Jiménez brings his expertise and passion to the forefront in the pursuit of reducing inequalities and fostering peace, justice, and strong institutions.

Mio, an initiative based in Central America, is led by Jiménez as its executive director. Mio’s primary objective is to recover the memory of LGBT history in the region, echoing the importance of cultural preservation and LGBT rights. This endeavor is crucial in creating sustainable cities and communities while striving towards the goal of reducing inequalities.

The supporting facts for Jiménez’s involvement in these causes are noteworthy. Oscar Jiménez’s affiliation with the University of Costa Rica research centre underscores his dedication to promoting freedom of expression and digital rights. Furthermore, he is highly regarded for his work in this field, making him a prominent figure in the pursuit of reduced inequalities and the enhancement of peace and justice.

Mio, under the leadership of Jiménez, stands as a testament to the importance of preserving LGBT history and identity. As the executive director of Mio, Jiménez plays a pivotal role in spearheading this noble initiative, which strives to create a sense of identity and pride in the LGBT community of Central America. The evidence suggests that Jiménez believes in the transformative power of preserving LGBT history and identity through Mio.

The analysis indicates a positive sentiment towards both Oscar Jiménez and Mio, highlighting their commitment to important causes. The shared focus on reducing inequalities aligns with the sustainable development goals of creating just and inclusive societies. These individuals and their initiatives serve as beacons of hope, sparking conversations and actions towards a more equal and harmonious future.

In conclusion, Oscar Jiménez’s work at the University of Costa Rica research centre, advocating for freedom of expression and digital rights, and his role as the executive director of Mio, an initiative seeking to preserve LGBT history in Central America, showcases his unwavering dedication to reducing inequalities and fostering sustainable communities. Their efforts are crucial in challenging existing norms and creating a more equitable and inclusive society.

Raimundo

Raimundo and his community have achieved something extraordinary by creating their own TV channel, Radio TV Quilombo Rampa. This is a remarkable accomplishment because the community used their own resources to bring their vision to life. They operate on the principle of “from the inside out,” which emphasises the significance of ancestral communication. This approach ensures that their channel truly represents the voices and experiences of the community, providing an authentic portrayal of their culture and heritage.

The creation of Radio TV Quilombo Rampa highlights Raimundo’s strong belief in the importance of communities telling their own stories. He understands that these stories hold immense value and play a vital role in preserving cultural identity. As a platform for the community, the TV station allows them to narrate their own stories and share their experiences with the world. Through their own channel, they can celebrate their achievements, address their challenges, and showcase their vibrant traditions.

Raimundo’s eagerness to share their experiences demonstrates his commitment to promoting community empowerment. By giving a voice to the community, the TV station empowers individuals, fosters a sense of belonging, and strengthens unity. It also serves as an educational tool, imparting knowledge and information that contributes to quality education within the community.

The creation of Radio TV Quilombo Rampa aligns with the global goals of industry, innovation, and infrastructure (SDG 9) and partnerships for the goals (SDG 17). This initiative showcases how communities can utilise their own resources and collaborate to create meaningful and sustainable change. It also addresses the importance of reducing inequalities (SDG 10), ensuring that marginalized voices are amplified and included in the media landscape.

In conclusion, Raimundo and his community’s achievement in creating Radio TV Quilombo Rampa exemplifies community development, cultural preservation, and community empowerment. Their dedication to telling their own stories and showcasing their experiences through this platform is inspiring. By taking control of their narrative, they have created a media outlet that genuinely represents their community and strengthens their identity.

Dennis Redeker

Dennis Redeker, a researcher at the University of Bremen and co-founder of the Digital Constitutionalist Network, proposes the creation of a Radical Imagining Fellowship for Digital Activists. The fellowship aims to foster both education and advocacy work among digital activists, with a particular focus on reimagining digital governance. Redeker believes that fellowships have the potential to empower digital activists and facilitate meaningful change.

To ensure the fellowship’s effectiveness, Redeker emphasises the importance of gathering feedback and data from attendees and those involved in running or funding fellowships. By understanding the interests and demands of the participants and stakeholders, the fellowship models can be improved and tailored to their needs. Redeker introduced a small survey to be completed by the attendees, as well as individuals involved in running or funding fellowships. This feedback will enable the development of more effective fellowship models and contribute to the advancement of digital activism.

In addition to physical attendees, Redeker welcomes online participants to contribute to the survey. He provides his email address for them to send their results and suggests posting his email in the chat. This inclusive approach ensures that the perspectives of a wider audience are considered, enhancing the overall validity and comprehensiveness of the data collected.

The Digital Constitutionalism Network, founded in 2019, plays a key role in advancing the cause of digital activism. The network runs a database on digital bills of rights, which currently contains 308 documents related to human rights and principles in the digital realm, including areas related to artificial intelligence. The network plans to update and expand this database in the future, further contributing to the understanding and promotion of digital rights.

Moreover, the Digital Constitutionalism Network is actively involved in teaching partnerships and knowledge exchange initiatives. These initiatives aim to combine teaching with the translation of knowledge to activists. By fostering an interchange of knowledge between students and young activists, the network empowers the next generation of digital activists and provides them with the necessary tools and insights to effect meaningful change. The network also aims to broaden the reach of academic knowledge beyond traditional BA and MA programs, supporting NGOs, civil society organizations, and media organizations.

Redeker highlights the need for new governance mechanisms in non-university settings. While existing stakeholders, such as students matriculated into the university, have certain rights and opportunities, not all stakeholders in the fellowship program receive the same benefits. Exploring new mechanisms and opportunities for flexibility can help ensure a fair and equitable experience for all participants.

Lastly, Redeker emphasizes the importance of preventing detrimental competition among fellows. He suggests that selecting participants from different places can prevent direct competition and foster a collaborative and supportive environment. By implementing strategies to prevent unhealthy competition, the fellowship program can promote a more inclusive and cooperative community among digital activists.

Overall, Dennis Redeker advocates for the creation of the Radical Imagining Fellowship for Digital Activists and emphasizes the importance of gathering feedback and data from a diverse range of participants and stakeholders. The Digital Constitutionalism Network, with its database on digital bills of rights and its teaching partnerships, plays a crucial role in advancing digital activism and promoting knowledge exchange. Redeker also highlights the need for new governance mechanisms and strategies to foster collaboration and prevent detrimental competition among fellows.

Ahmad Karim

Ahmad Karim, of the UN Women Regional Office for Asia-Pacific, has proposed a unique fellowship model that combines fellowship, forum, experience, and mentorship. This model aims to support and empower 30 individuals each year through capacity building programs, mentorship, strategic overviews, and connections with country offices. Fellows are actively involved in co-creating campaigns and toolkits and in updating knowledge products, ensuring their contributions have a lasting impact.

Karim highlights the flexibility of this fellowship model, which caters to the varying needs of young activists. This is particularly beneficial for activists who are also studying or working alongside their activism. Fellows have the freedom to choose their preferred events or forums and have nomination opportunities to speak at major decision-making forums and conferences. This allows them to have their voices heard and influence policy discussions.

The fellowship model prioritizes real-life experiences and practical challenges, providing fellows with valuable learning opportunities. Fellows engage in actual challenges and can relate their experiences to their activism. They also have the chance to participate in large-scale decision-making processes, effectively communicating their realities to decision-makers.

Involving fellows in program redesign and governance has proven beneficial. A group of fellows is selected to be part of the redesigning process, using their experiences to identify what works and what doesn’t. Their direct involvement leads to recommendations that improve the effectiveness of the fellowship. Furthermore, including fellows in the selection process of future fellows reduces bias and uncovers potential candidates with significant achievements.

Including fellows in the decision-making process fosters a sense of common responsibility and ownership. Although it may be time-consuming, collaborative decision-making enhances fellows’ understanding of why certain decisions are made and encourages active participation in implementation.

In conclusion, Ahmad Karim’s fellowship model offers a unique combination of fellowship, forum, experience, and mentorship. It prioritizes flexibility, real-life experiences, and practical challenges, allowing fellows to contribute to decision-making, program redesign, and governance. This inclusive approach adds valuable perspectives and fosters mutual responsibility and ownership. The model contributes to the advancement of gender equality and quality education, empowering young activists.

Eve Goumont

The speakers engaged in a thought-provoking discussion centred around the intersection of AI, human rights, and education. They emphasised the profound impact of AI on the right to higher education under international human rights law. Specifically, Eve Goumont, a PhD candidate at Montreal University, focused her dissertation on exploring this very issue, highlighting the implications and challenges that arise when incorporating AI into the educational landscape.

Moving on to the topic of fellowship programmes, the speakers underscored the significance of trust in fellows. They argued that when fellows are granted the autonomy to work on projects of their choosing, the overall outcomes tend to be more successful. In the rapidly evolving realm of technology, adhering strictly to a pre-determined plan outlined a year in advance often proves to be arduous. Consequently, cultivating trust becomes a pivotal factor in enabling fellows to adapt and make essential adjustments along the way.

Furthermore, the speakers delved into the social dynamics within fellowship communities and their impact on mental health. One notable observation was that diversity among fellows, in terms of backgrounds and areas of expertise, fosters a sense of community and solidarity. This environment stands in stark contrast to competitive environments, where collaboration and support are oftentimes lacking. Additionally, the discussion touched upon the importance of addressing mental health concerns within fellowships. Creating a sense of community and fostering solidarity among fellows was identified as an effective strategy to promote mental well-being.

In conclusion, the intersection of AI, human rights, and education is a pressing topic that requires careful consideration. The impact of AI on the right to higher education, as highlighted by Eve Goumont’s research, poses important questions regarding the ethical and legal implications of AI implementation. Trust emerges as a critical component in fellowship programmes, promoting innovation and yielding better outcomes. Furthermore, the diverse and inclusive nature of fellowships contributes to mental health and the establishment of supportive communities. Overall, these insights shed light on the complex interplay between technology, human rights, and personal well-being in educational and professional contexts.

Manu Emanuela

Upon analysing the speaker’s statements, several key arguments have emerged. Firstly, it is argued that the competitive nature of youth programmes can have negative implications for participants’ mental health. Manu Emanuela’s experiences highlight the problems that can arise from the emphasis on competition within these programmes. This observation underscores the importance of considering and addressing participants’ mental well-being when designing and implementing youth programmes.

The second argument relates to online courses, which are reported to be both difficult and inaccessible. These challenges are particularly acute for vulnerable sections of society: the difficulty level of the courses and the barrier of online access can hinder equal educational opportunities and perpetuate the digital divide.

Another issue raised is the lack of continuity and long-term engagement in youth programmes. Manu Emanuela’s experiences serve as evidence to support this argument. Maintaining consistent involvement and sustained engagement of youth in such programmes is crucial for achieving positive outcomes, such as quality education and decent work and economic growth.

On a positive note, the necessity of skill development within youth programmes is highlighted. The speaker emphasises the importance of acquiring skills such as project management and grant application in order to increase success in securing grants and conducting risk assessments. However, it is pointed out that current programmes do not focus adequately on developing these essential skills.

Furthermore, the analysis brings attention to the funding aspect of civil society organisations in Brazil, noting that many of these organisations are funded by big tech companies. This raises concerns about the potential influence of these corporations on the freedom and independence of civil society, creating a chilling effect on what these organisations can say and do.

Lastly, the significance of alumni networks in youth programmes is highlighted. Manu Emanuela suggests that alumni can become mentors and provide valuable guidance based on their experiences. This recommendation aligns with the argument that continuous support and engagement, facilitated through mentorship, can contribute to the success and long-term impact of youth programmes.

In conclusion, the analysis sheds light on various aspects of youth programmes, including the potential impact on mental health, challenges arising from online courses, the lack of continuity and long-term engagement, the necessity of skill development, concerns about big tech funding within civil society, and the importance of alumni becoming mentors. These insights provide valuable considerations for improving the design and implementation of youth programmes to ensure positive outcomes and promote the sustainable development goals.

Faye

Faye actively participated in the discussion, revealing that they are currently pursuing a master’s degree in Taiwan, demonstrating their commitment to furthering their education. The conversation also touched upon the topic of higher education and career goals, with Faye expressing an openness to considering a PhD program in the future. This indicates their ambition and dedication to their academic pursuits.

Faye displayed a positive sentiment and genuine interest in the discussion, actively engaging and contributing to the conversation. This enthusiasm fosters an environment of collaboration and knowledge sharing among participants.

The main topics discussed revolved around education and career development, highlighting the importance of quality education. These topics align with SDG 4: Quality Education, which aims to ensure that everyone has access to inclusive and equitable quality education.

Additionally, the discussion touched on the subjects of communication and knowledge acquisition, illustrating a broader scope and an interest in how effective communication and knowledge acquisition contribute to personal and professional growth.

Overall, the analysis highlights Faye’s active involvement and desire for further academic accomplishments. Their positive sentiment indicates a motivation for personal growth and a commitment to contributing to the field of knowledge. The topics discussed, such as education, academic career, higher education, career goals, communication, and knowledge acquisition, are interconnected and reflect the broader context of personal and professional development.

Session transcript

Dennis Redeker:
Oh, this works. Fantastic. Welcome, everyone, to our session today. This session is called Radical Imagining Fellowships for Digital Activists. We’re representing the Digital Constitutionalist Network and IT for Change here in the room, and we’re going to be talking about how to create a Radical Imagining Fellowship for Digital Activists. The idea is that we’re quickly introducing ourselves and the general idea of the session. And then we’ll take it from there. It’s great that you came. It’s very nice that you all took a place at the table so that we could have a bit of an exchange, because we have many things to learn from you about what kind of fellowships are useful. And we’re presenting the kind of thinking that we have on what kind of fellowships we think are useful for digital governance that can be reimagined. My name is Dennis Redeker. I’m a researcher at the University of Bremen. I’m also one of the co-founders of the Digital Constitutionalist Network. I’ll talk a little bit more about it in a bit, but it educates so far mostly students of BA and MA programs and has a mission, too, to do advocacy work. And then the question is how we can move into a space that allows us to educate more scholar-practitioners. And I’m handing over the mic to you, Anita.

Anita Gurumurthy:
Thanks, Dennis. So the starting point of my organization is slightly converse, but I hope we achieve as much as university spaces do. I come from a nonprofit space where our work is to contribute to social justice in many ways. And over the past several years, again, I will get into it in detail later on, we have worked on capacity building for both academics, practitioners, social movement people, and also those who want to be engaged in issues of digital rights, the digital economy and digital society, and both at the level of organizing communities and at the level of policy change. Through that, we’ve had some insights on what it might take to build the kind of institutional depth and traction that is necessary so that the digital rights debates are much more accessible to regions and organizations that are under-resourced. So the session partly also addresses this coming together of two kinds of spaces so that we can co-design something from your experiences as well for the next-gen activists and scholars who might want to contribute to the domain. So that’s like a brief introduction.

Hélène Molinier:
Thank you very much. Hello, everyone. My name is Hélène Molinier. I work for UN Women. I’m managing the Action Coalition on Tech Innovation for Gender Equality. So very much here in both listening mode and really eager to see how we can find a solution to bring new voices to the digital cooperation stage and especially voices that bring a feminist lens and have a strong interest in human rights approach to digitalization. Thank you. Over to my colleague.

Ahmad Karim:
Hi, everyone. My name is Ahmad Karim. I’m also from UN Women Regional Office for Asia and the Pacific. I lead the work on innovation campaigns and advocacy and the US portfolio within Asia and the Pacific.

Manu Emanuela:
Hello, everyone. My name is Manu Emanuela. I’m from Brazil. I was a youth from Internet Society and from the Brazilian Steering Committee, and today I work with children’s rights and specifically allowing children to participate in this kind of debate as well at Instituto Alana.

Alice Lanna:
Hello, everyone. I’m Alice Lanna. Nothing to do with Instituto Alana. I’m a mentor for the Brazilian youth group today in IGF, and I would like to excuse myself in advance because we have a meeting for the youth Brazilian group at 5, so I will leave in the middle of the conversation. But I’m really glad to be here because I think that’s exactly the kind of discussion we need to be having about having youth not on the menu but sitting at the table and discussing.

Christian Leon:
Hello. My name is Christian Leon. I’m from Bolivia. I’m the current Executive Director of Internet Bolivia Foundation and Secretary of Al Sur, that is a coalition of 11 civil society organizations working towards promoting digital rights in Latin America. I’m here just to learn, and if I have something, I will share it with you. Thank you.

Arielle McGee:
Hello. My name is Arielle McGee. I am a program officer with Internews for their Asia region. I work primarily with women and youth-led civil society organizations, media, and media-adjacent CSOs, journalists, and kicking off a project on human rights and internet governance, and so we are part of that, have a large fellowship program. So curious to hear what you guys have learned and how we can implement that going forward.

Faye:
Hi, everyone. I’m Faye. I’m currently a master’s student in Taiwan, and I sometimes work with or for… and I’m considering doing a PhD with him. I’m just interested in what you’re going to talk about.

Anita Gurumurthy:
What did he say? It’s on? It’s on. That’s fine.

Raimundo:
My name is Raimundo Quilombo. I live in Quilombo Rampa, Vargem Grande, Maranhão. I’m from Radio TV Quilombo Rampa, an organization that emerged in the community as a way for the community to tell their own story through popular communication. So we created a TV in the community with our own resources, a communication that we call from the inside out, which is ancestral communication. We are here to participate and share this experience with everyone who is here today.

Barbara Leodora:
I’m going to translate for him. Raimundo, he’s a Quilombola from Quilombo Rampa in São Luís do Maranhão, Brazil. He created Radio and TV Quilombo at his community, which they define as inside-to-inside communication. And he’s a popular communicator. We’re here together today. I’m Bárbara Leodora. I’m from Article 19, Brazil. And I am responsible for a campaign of ours that we created in the time of the pandemic, which is a campaign which provides fellowships for popular communicators in the whole country, where they can provide knowledge and information. And we had two editions, 2020 and 2021, for popular communicators to inform the public about the pandemic. And then we had one last year for the elections, because we figured it was an effort to guarantee that it would happen, our elections, and it did. And right now we’re having a casual agreement edition. So we’re here to learn and exchange experiences on this fellowship, because it’s a great thing we’ve done. I’m very proud of it. And I’m very proud to be here, excited.

Oscar Jiménez:
Hi. My name is Oscar Jiménez, Oscar Mario Jiménez. I am from Costa Rica in Central America. I work in University of Costa Rica in a research center that promotes freedom of expression and digital rights. And also I am executive director of Mio. It’s a museum of identity and pride. It’s an initiative to recover the memory of LGBT history in Central America. So I love the title of session. So I’m here to learn.

Eve Goumont:
Hi. I’m Eve Goumont. I’m a PhD candidate at the Montreal University in Canada. I’m also a guest researcher at Keio University here in Tokyo. I work in AI and human rights. And my dissertation focuses on the impact of AI on the right to higher education under international human rights law.

Dennis Redeker:
So one of the first things we’d like to do, because we know that sometimes when it’s getting toward the end of a session, people have to leave or one forgets: we have a small survey, and we’d like to show you a link. If you could fill it in, that would very much help us to better understand the demands and interests of people in fellowships. There are also questions that you can answer in there in case you run a fellowship or in case you provide funding for a fellowship. Just to get some resources together, we’d like to learn from this as we develop our own models. And we’re going to show the link in a second. It’s a short link.

Anita Gurumurthy:
I just wanted to say that it’s a very short survey. And if you can leave your email IDs, we’ll be happy to also share the analysis of the survey with everybody. That’s precisely to account for the conflicting priorities we sometimes have at the IGF and therefore the voices are not carried right through to the end. So that is the link. And I’m going to also circulate this. Please write in a little bit bold or something, capitals. I’m not very good at deciphering handwriting. So I’ll start from here. Would you be interested in receiving a copy of the… Do you need a pen? Yes. So soon after, we can open up the session. So we’ll probably take about five, ten minutes for this.

Dennis Redeker:
And one comment to those online. Welcome again. And you’re obviously also welcome to fill in the survey. And let us know your email addresses. I will post mine. So this is most privacy preserving. I’ll just post my email. Does that make sense? Post my email here in the chat. And then you can send me an email with your email address. Or just send me an email and we’ll send you the results as well.

Audience:
Okay. We can do a webinar or two on that. We can do a webinar. I would like to do that. You can just let me know. Yeah. Okay. I’ll do it. I’ll do it. That’s good. Okay.

Dennis Redeker:
So how is everyone doing on the survey? Finishing up? Okay. Wonderful. Just wanted to make sure. Then I’m going to share my slides, so I’m quickly going to say something about the current activities. And then we jump into our exchange. Let me share these. As I said, I’m one of the co-founders of the Digital Constitutionalism Network, founded in 2019 in Bochum, Germany, by academics from mostly Europe, but also from around the world. We’ve had people from, currently people from all continents represented in the network, and the network is focused on research, but also teaching and training, that’s what we’re going to talk about today, and advocacy, that’s also related to today, in the field of human rights and the internet, well, obviously that’s the focus here. We run a database on digital bills of rights, so documents that proclaim rights and principle claims. We have now a database, I’ll show this in a sec, of 308 documents that we have assembled. It’s a great resource for research activities, but we also have an annual teaching partnership, which we now do online and in person, and we are now currently planning for long-term, one-year research incubators to be conducted. This is how the database looks. You can check it out. It is, I think, a helpful research tool also for advocates, for advocacy, to see what other documents are out there that demand human rights and principles on the internet in a digital field, also related to AI, and this database is going to be updated very soon. We do research with this.
Just last week, we spent a week with students from across Europe and Italy in order to teach human rights online using, among other things, these documents and the database, and going forward, the Digital Constitutionalism Network not only wants to partner with additional partners, and we’re working with IT4Change, for example, we’re open to other partnerships, but we’re thinking about, on the one hand, how can we combine the teaching that we do with our students into translating knowledge to activists, to young activists who come in and who can benefit from this interchange, or ideas to create year-long research incubators by which people from different ways of life, walks of life, can join. They’re being supported by members of the network and of partners, by expert advisors. There will be MA and PhD students among those who receive part of this fellowship cohort, but also members of civil society groups, NGOs, or independent young researchers. So that’s pretty much our pitch, our idea. We’re still working on this, again, working with IT4Change, but we want to be more open. We’ll have some more open discussion, not on this, but just on the things you all do and things that you can advise us also on doing when we pursue such a scholar-practitioner route for teaching, which is something new for most of the people at universities. We often are geared toward the BA, MA programs that we have, but we often neglect, I think, the people that would otherwise have an opportunity to also gain academic knowledge, but who have a background in an NGO or civil society organization, or any other place, really, media organizations, for example.

Anita Gurumurthy:
Thanks so much, Dennis. I just wanted to add a couple of things. This is, I think, the 18th IGF, and in many ways, even five years ago, if you actually looked at people working in the grassroots on various issues that occupy the time and energies of social movements, you would find that they don’t understand what digital rights are. They don’t, perhaps, engage with those issues in such a palpable way, but I think very rapidly since the pandemic, that situation has changed, and although there used to be the idea of digital activism, or using digital spaces for activism, in the past few years, movements have begun to embrace the idea that their own issues are beginning to be redefined by digitalization, so those who have been working on education, for instance, have had to grapple with ed-tech technologies, and others, for instance, in the health domain are quite worried about what happens to health data and cross-border data flows, et cetera, et cetera. Big justice activists are really grappling with the idea of trade agreements and algorithmic non-transparency in developing countries because of free trade agreements, so the field is changing, and younger people are beginning, really, to understand and grapple with these issues, and by that, I don’t mean to over-generalize who these youth are, but I think, broadly, talking about those who normally are part of the very, very fabric of struggle and dissent in their own contexts, and people who are really showing the way in terms of the pluralistic dimensions of human rights that we would really like to present, right? So we have, at IT for Change, done two rounds of, I would say, mentoring of fellows. One was on the digital economy and gender through a feminist lens, and that was very rewarding. The other one was an in-situ, one-week fellowship program that we did in Thailand earlier this year, which was called Frames and Frontiers for Digitality. 
If you would have two minutes, maybe you can say something from your, and you could, maybe the two of you that have to go, is it already time? Okay, I wouldn’t, it’s already five, yeah, I understand, so that’s fine. So on the Frames and Frontiers for Digitality, we really found it extremely useful that we brought people who are mid-career professionals from different organizations, and they had a lot to say about how they would shape their programs, and the programs that they were holding either as officers in, let’s say, large organizations that were working on poverty and development, or organizations that were working on digital rights. So these were very instructive. One of the things that came out was, are the existing fellowships for digital rights leading us, in a way, to a kind of individualized paradigm where institutional strengthening is not happening? Another question was that tech fellowships typically tend to privilege certain kinds of fellows, who may not then contribute back, or the entire structure of these fellowships may not allow the contribution of their work to sustain social movement. So the effectiveness of these fellowships was something that was called to question, and that’s a problematic, just a provocation to analyze, and I just wanted to put it on the table. We just have two broad questions for the session, and maybe Dennis, you can…

Dennis Redeker:
Yes, so there are obviously many questions that we have. We have proposed two things to discuss, and we’re very happy if you bring in your other questions and your questions for discussion. The first one here, also on the screen, would be, what does the current landscape of funding and fellowships for young activists working in the digital spaces look like? So what is out there? So what can we kind of collect also here as a brainstorming? Who wants to start?

Manu Emanuela:
Hey, everyone, I think I’m going to share a little bit about the youth programs that I’ve been a fellow, and how they are organized. So it’s not exactly funding, but more like fellowship, and the first one, I had a course, and one thing that I think it’s really important to think about is mental health during this kind of procedure. Why? Because the two programs that I participated, they had a competition vibe. So people were really competing against each other to be able to reach this opportunity, and this caused a lot of problem. So this is an issue. I think another one is the fact that the courses, they are very difficult, and they are online, so you have the accessibility things, and something that makes me wonder if the most vulnerable people, they are able to reach this opportunity, and even the fact that it’s very difficult for understanding infrastructure issues, and understanding this debate, and then competing against each other to be able to get the fellowship. So all of this, I think, are things that we have to face it. Another thing that I think that the programs today, they allow you to reach opportunities, like I went to the 2018 IGF, and this is the only reason why I’m here today. It changed my life, because I saw this is what I want to do. I want to participate in this space. But you don’t have a lot of continuity to be able to, in these programs, at least in Brazil, from my perspective, to be able to continue your engagement. So what I did was I will go after civil society and the NGOs that exist, and I will try to get a job, and that’s basically what I did. But I don’t see a lot of youth-led organizations in that sense that are like, oh, let’s empower youth, and because of this program, let’s do an organization. I think there are a few aspects of this, like the importance of project management skills and skills to go after funding and do risk assessments, all of these things that we know that it is important to apply to a lot of grants. 
And these programs do not help you with those kinds of skills and abilities. So it would be very important to empower people through fellowships so that they can form organizations. And the last reason I think this is so important is the landscape of civil society in Brazil today: a lot of organizations are funded by big tech. And if you are funded by the private sector, there are particularities in what you can do and what you can say, even if it’s very open; well, you are funded by them. So there is the importance of freedom, of not having a chilling effect, in the funding opportunities that we have. Those are a few of my considerations. I think it’s very transformative to be able to participate in this kind of event, and this should continue, but it should also allow more long-term engagement, and an alumni network, where alumni can become mentors, help engage people with their experience, and build futures together. So I think I approached the two questions a little bit, and maybe later I can share some other thoughts. Thank you. Yeah? Okay.

Ahmad Karim:
Well, at the UN Women Regional Office for Asia-Pacific, we have designed a very different model for our fellowship. It’s a flexible combination of fellowship, forum, experience, and mentorship that runs for a whole year instead of a short term. We have a group we call the 30 for 2030. Every year we select 30 people from across the region, from different fields of experience. From UN Women’s side, we give them a capacity-building program, mentorship with our advisors, and connection with the country offices, so they have the data, the evidence, and a strategic overview of what the organization is doing in the region, but they are also connected to the country offices where implementation is happening. And it doesn’t stop there; that’s more of a preparation phase, and this group is amazing: leaders, activists, CEOs of companies, researchers. After that, we work with them in co-creation. They co-create campaigns; right now we’re launching an online GBV toolkit co-designed with them. What’s really unique about their work is that they want to create a knowledge product as a living system. It’s not a knowledge product we put on the shelf, put online, and forget about. We update it every year. Last year we launched one version; this year we’re updating it, adding more forms of violence, for example, innovation, and the stakeholders working on it, and then building campaigns and other initiatives on the ground. It’s being translated into nine languages in the region right now. And then there are other experiences: some of our members really value the flexibility to attend events, forums, or conferences of their own preference. They get to select some of those events, and we support them, whether financially or with a nomination, to attend some of the big forums.
We also nominate them to speak in decision-making forums, like big conferences, CSW, the General Assembly. So it’s a pre-selected group we invest in, but at the same time they are the ones giving us their expertise and what they know. At the end of the year, they have the choice to stay for another year as a senior fellow, mentor those who are coming in, and be part of that pool of opportunity. Or they can just move on; but all of them decided to stay. So one good practice we have found is that flexibility, especially for young activists. A lot of them are studying, working, and doing other amazing things, and we should give them the flexibility to come in when they can, to pull back when they need to give attention to other parts of their lives, especially if they are not paid for some of that work, and to take a break when their mental health needs it. When they have that flexibility, they give you 200% of their time, because they come on their own terms. Also, getting them involved in practical challenges gives them real-life experiences they relate to, and they don’t have to attend things outside their expertise or relevance. We have a privilege at a UN organization in that we can nominate them to large-scale decision-making processes, and I think this is where they see the value of sitting with ministers and heads of state and being able to communicate the reality of their lives. It’s very different when they say it than when we say it, or when older people say, oh yeah, I was young once, I know how you feel. No, you don’t; this is a different reality, and it needs to be said by them, themselves. Thank you.

Alice Lanna:
I could afford two more minutes and come back, because this panel is really very interesting, so I’m glad I can collaborate as well. To go really quickly through these two questions: one thing I feel is missing in some funding processes is the ability of the person being funded to participate in the design of the process, to be heard, not just thrown into the process as a tool to be sent through all the phases, but as someone whose opinions will also be listened to. That way they will engage even more, right? They will feel that their opinion, not only on the content but on the process as well, is valuable. Maybe the funders will not get exactly the results they envisioned, but maybe they will get a better result in a different form. So that’s one approach I would like to bring. Another thing that relates to what we’re discussing here is the balance between mentoring and trust. It’s important, when we’re talking about funding, to have someone who is there for the process, listening and helping, but there must also be trust, in the sense of flexibility and of understanding that the person or organization being funded is not an empty vessel that needs to be filled; they have their own experiences and a lot to teach as well, not only to learn. So I kind of gave an overview of the issue, but if I had to choose two words, it would be this balance between guidance, or mentoring, and trust in the person who is there. Thank you.

Barbara Leodora:
Hi, I wanted to talk about our experience at Article 19 Brazil. This campaign started in 2020 with the pandemic. We had money that we would have used for in-person activities and couldn’t anymore, so we figured we would reroute it to popular communicators, because we figured they were the most qualified people to inform the Brazilian people about the state of the pandemic and what we should and shouldn’t do, especially because at the time we had a government that was spreading misinformation, so we had to counter that. I took some notes on the things we’re proud of in this program, which we have kept running and improving. The first is that it’s not technically a digital rights fellowship, but it’s closely related to one, because all of these fellows do what they do online, digitally, using technology. And we are thinking about a next edition specifically on digital rights, which I hope we can do. The first thing we realized at the time is that we couldn’t have rules on how the money was spent. Firstly, because it was a pandemic: people were without their jobs, without their normal lives and everything else. And also because Brazil has a very extensive territory with very different realities. So we figured we couldn’t have rules like, you have to spend this money on equipment to produce the information, you have to spend it on this and not that; we had people at the time using the money to pay bills. This was the first rule we decided on, and I think it’s the most important one, the most valued rule, or non-rule, in this program, and the one people appreciate most. I think it created trust between us and our fellows. And it’s about respect, too, because we’re trusting them: you can do whatever you want with it.
We’re just trusting that you’re going to keep doing what you’re doing, which is qualified communication. And this is about respecting their identities, their agency, their autonomy, and their realities. The same goes for the formats of the productions: we couldn’t tell them, oh, you need to do a three-minute video, because each reality was different. We had people doing things that were not necessarily what we think of as popular communication, because there were dialogues with Indigenous people and with quilombola territories. So I am very proud, every time we have new fellows, to say: we’re respecting your identities, we’re respecting your autonomy. And it’s great to see the reaction to it. Then there is capacity building, which we implemented over the editions. In the last one, and in this one, we have courses and workshops, building their capacity to produce what they already produce so well, but on specific topics. Also, community and network building: we see the campaign and the fellowships as a group, so we have WhatsApp groups to communicate, because the fellows are all over Brazil. And, as was already mentioned, there is community creation, where they participate in the design of the program and make decisions with us about what they’re doing and about the results. And lastly, the active engagement of the fellows with the rest of the organization. Article 19 Brazil has four or five thematic areas, and they all interacted with the fellows throughout the campaigns and editions. We call them to give interviews, we call them for new projects, we call them for things that aren’t necessarily part of the fellowship program, but we are engaging them with Article 19 all of the time. And I think that’s it.
And mutual learning, which was also already mentioned: we love to hear what the fellows felt about each edition, and then implement new and better things in the next editions every time. I think that’s it. Thank you.

Eve Goumont:
Short, yeah. I’ll build on what’s been said. There are two things I found super interesting. The first is trust: if you trust us, maybe you’ll get something better than what you were expecting. I think that is an extremely valuable comment, because oftentimes, when you apply for a fellowship, you have to say, in a year I’ll be working on this or that. Technology is fast-paced; it’s hard to still be working on the thing you said you would be working on when you applied. So trusting fellows and allowing them to work on whatever they want to work on often gives good results, I think. I hope so. The second thing is mental health and competition. A sense of community and solidarity between people is something we lack, at least in the academic community, and if fellowships were able to provide that, I think it would be super interesting. And I have noticed it often happens when you don’t have people who all look the same; when people are quite different, coming from different fields of expertise and different countries, competition is less present, because you can learn from one another instead of competing with each other. Yeah.

Dennis Redeker:
Thank you very much. Just checking online: is there anyone online who wanted to intervene? It doesn’t seem to be the case. We do have a few minutes left. Does anyone want to comment, or come in again? Yeah, I sent a message and asked; no, there doesn’t seem to be anyone online. Any questions here in the room? I can add one more. Please.

Anita Gurumurthy:
Thank you. I’m sorry for your big line.

Ahmad Karim:
I think one thing we also learned from past experience is that engaging the fellows in the redesign of programs is very important, and having them be part of the governance of the program itself. What we did is we chose a group; we got nominations from the group itself to be part of the redesign of the next phase. So they drew on their experiences: what worked, what hadn’t, what they would recommend, what they would have loved to do more of. We picked those people up and made them the core of the group. We also included them in the selection process: we asked for different representation from the group to be part of selecting the next fellows. That gave us several things. First, they knew a lot of those people, so they gave us more insights, and they had less bias, because they brought the perspective of being on the same level. They know what certain people would say, but they could also recognize fellows who don’t have the capability to market themselves, because a lot of great people are too modest, while the group knows about their great work. They were saying, no, we know that person is really amazing, and they’re doing this, but they did not mention it in their application. That on-the-ground validation from our members was really helpful, that check from the ground, staying connected and involved, and redesigning the program every time you get a chance to do it. And having mutual responsibility in the management also really helped. It alleviates some of the workload for us, but it also gives them the chance to take on that responsibility, so they see, from the perspective of managing the fellowship or the program, what is happening and why certain decisions are made the way they are.
By including them, we also get insights into other ways to do things, ways that could be easier or faster. Sometimes it’s not faster, but at least when we make a decision together, it’s a common responsibility, and people feel good about it because it’s our own decision; we’re in this together. So when there is that freedom to make decisions together, it takes time, but it’s really helpful for getting everyone on board with a decision they might be affected by.

Dennis Redeker:
Thank you so much, everyone. If I may just reflect very briefly on our current plans and what I’ve heard: this is so inspiring, including things we partly hadn’t thought about. As the Digital Constitutionalism Network, and also with the cooperation, we can learn a lot. Take participation in governance: coming from a university setting, that is often assumed. When we teach, we know there are formal roles for students in university governance. But if we branch out and engage other stakeholders, that doesn’t necessarily apply. If we think of this as open training for civil society, it doesn’t mean people are matriculated into the university, so they don’t necessarily have the same rights, and we need to develop new governance mechanisms. We can be more flexible in that sense, with new mechanisms that might even be better than the ones we have. And there are so many things I hadn’t actually thought about, like your solution to the problem of competition: bringing in people from different places, which really doesn’t put them in that kind of competition. In Asia-Pacific you have this automatically if you take one or two people from each country. So this was really helpful for us to think through. We are developing this and submitting it for grant applications, and we’ll update everyone who submitted an email address today if something comes through. We’ve taken notes, and we’ll put this on the IGF website after the session, obviously, and you’ll get the survey results. Anita, is there anything you want to say?

Anita Gurumurthy:
I just want to thank everybody for being so generous with your reflections, because there is a wealth of experience coming from different standpoints. Thanks for the candid feedback and for your time filling in the survey. Thank you. And I want to give a round of applause to Juan Frazier.

Audience:
Yeah. Yeah. Okay. I don’t know, I’m just following orders. What do I know? No shit. Okay. And we’re out one page. Now I’m kind of confused. Okay. Okay. Okay. Okay. So should we stay close to the folks there?

Ahmad Karim

Speech speed

166 words per minute

Speech length

1311 words

Speech time

474 secs

Alice Lanna

Speech speed

136 words per minute

Speech length

430 words

Speech time

189 secs

Anita Gurumurthy

Speech speed

157 words per minute

Speech length

1047 words

Speech time

400 secs

Arielle McGee

Speech speed

147 words per minute

Speech length

81 words

Speech time

33 secs

Audience

Speech speed

59 words per minute

Speech length

179 words

Speech time

181 secs

Barbara Leodora

Speech speed

137 words per minute

Speech length

1035 words

Speech time

454 secs

Christian Leon

Speech speed

137 words per minute

Speech length

66 words

Speech time

29 secs

Dennis Redeker

Speech speed

163 words per minute

Speech length

1603 words

Speech time

589 secs

Eve Goumont

Speech speed

156 words per minute

Speech length

326 words

Speech time

125 secs

Faye

Speech speed

158 words per minute

Speech length

42 words

Speech time

16 secs

Hélène Molinier

Speech speed

152 words per minute

Speech length

91 words

Speech time

36 secs

Manu Emanuela

Speech speed

185 words per minute

Speech length

683 words

Speech time

222 secs

Oscar Jiménez

Speech speed

145 words per minute

Speech length

87 words

Speech time

36 secs

Raimundo

Speech speed

182 words per minute

Speech length

91 words

Speech time

30 secs

Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The focus of the analysis is the significance of national partnerships between the private sector, civil society, and governments in establishing a robust data economy. These partnerships are deemed essential at a national level, bringing together stakeholders to collaborate on developing and managing data resources.

The argument presented highlights the necessity of national partnerships for effectively implementing a data economy. The involvement of the private sector, civil society, and governments is seen as vital in addressing the challenges and opportunities related to data sharing and utilization. The analysis stresses the need for all relevant parties to form synergistic partnerships to effectively tackle these issues, as they are critical for progress.

Additionally, the analysis emphasizes the need for an intermediary to oversee and manage data pooling. Data pooling is deemed valuable for companies as it enables greater access to diverse data sources. However, commercial sensitivity issues can arise from data pooling. Therefore, an intermediary is recommended to facilitate and navigate the complexities of data pooling, ensuring that commercial sensitivities are appropriately addressed.

Furthermore, the analysis advocates for the creation of universal international standards for data sharing. The presence of different data regulations and licenses in each country is considered an obstacle to effective data sharing. Government obstructions in accessing data are also recognized as hindrances to technological advancement. To overcome these challenges, the establishment of international standards is recommended. This includes considering South-South cooperation in standard setting to ensure comprehensive and inclusive standards.

Lastly, the Internet Governance Forum (IGF) Secretariat is specifically mentioned as being tasked with setting international standards. This underscores the recognition of the IGF’s pivotal role in developing standards and guidelines for the governance and management of data resources on a global scale.

In summary, the analysis highlights the importance of national partnerships in fostering a robust data economy. The involvement of the private sector, civil society, and governments is pivotal in tackling challenges related to data sharing and utilization. The need for an intermediary in managing data pooling, the creation of universal international standards for data sharing, and the role of the IGF Secretariat are all key points addressed in the analysis. Overall, the analysis provides valuable insights into the considerations and recommendations for the effective development and management of data resources.

Helani Galpaya

Public-private data partnerships are crucial in understanding and achieving the Sustainable Development Goals (SDGs). Microsoft, a key player in these partnerships, not only shares data but also provides infrastructure, capacity building, and sets standards. However, there is a trade-off for companies like Microsoft between generating revenue and undertaking philanthropic work due to their commercial nature. Balancing these trade-offs requires careful consideration and strategic decision-making. Investing in low digitally connected countries is seen as a long-term vision that can contribute to achieving the digital SDGs and bridging the digital divide. Data protection laws pose challenges to data sharing and research collaborations, but techniques like federated learning offer potential solutions to work around these restrictions. Data pooling can also maximize the value of data by pooling resources from multiple companies and government departments, leading to collective insights. To ensure data privacy and security in data pooling scenarios, it is important to involve a trustworthy party. By leveraging the expertise and resources of both the public and private sectors, we can make progress towards the SDGs and create a sustainable future.
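Federated learning, mentioned above as a way to collaborate despite data protection restrictions, can be illustrated with a minimal sketch: each party trains a model on its own private data and shares only the fitted parameters, which a coordinator then averages. The parties, data, and numbers below are hypothetical, for illustration only.

```python
# Minimal federated-averaging sketch: each organization fits a simple linear
# model locally and shares only its parameters; raw data never leaves the party.
import random

def local_fit(data, lr=0.05, epochs=2000):
    """Fit y = w*x + b by per-sample gradient descent on one party's private data."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def federated_average(parties):
    """Train locally at each party, then average the parameters centrally."""
    fits = [local_fit(d) for d in parties]
    w = sum(f[0] for f in fits) / len(fits)
    b = sum(f[1] for f in fits) / len(fits)
    return w, b

random.seed(0)
# Three hypothetical organizations, each holding private noisy samples of y = 2x + 1.
parties = [
    [(0.1 * i, 2 * (0.1 * i) + 1 + random.gauss(0, 0.1)) for i in range(10)]
    for _ in range(3)
]
w, b = federated_average(parties)
print(f"averaged model: w={w:.2f}, b={b:.2f}")
```

The averaged parameters land close to the underlying relationship even though no party ever sees another's records; production systems (and the privacy guarantees discussed above) add secure aggregation and differential privacy on top of this basic scheme.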

Rodrigo Iriani

The Latin America and Caribbean region faces challenges in the data ecosystem and requires increased participation from the private sector. It is known to be one of the most unequal regions in the world, with limited involvement from the private sector in the data ecosystem. Successful examples of public-private data partnerships invest time and effort in building trust, establishing proof of concept, and adapting value propositions. These partnerships align their projects with the mandates of development, human rights, and democracy, showcasing the potential for collaboration between the public and private sectors.

Active government and private sector initiatives play a crucial role in providing digital and data skills, capacity building, and employability. Philanthropic support and development projects have seen a noticeable shift, with examples such as a project in Jamaica that aims to train youth in digital skills and data literacy. This positive sentiment reflects the efforts made to bridge the skills gap in the region.

To achieve the Sustainable Development Goals, synergy between the private sector, government entities, and civil society is necessary. Multilateral development banks require more collaboration between these stakeholders, emphasizing the significance of collective action in addressing challenges and achieving sustainable development.

The established platform and relationships with multinational organizations, governments, and big companies generate trust for new partners, demonstrating the potential for future collaborations. However, a challenge lies in gathering new funding, as some private sector entities and international donors do not cover operational costs. This negative sentiment highlights the need for alternative funding sources or increased support from these entities.

Adapting the current model is necessary to continue making an impact on the communities being served. This neutral sentiment acknowledges the need for flexibility and evolution to effectively address the evolving needs of the region.

National partnerships and private sector data are crucial for social and economic development. Successful partnerships between the private sector and national ministries have been observed, using data for social and economic development, such as through hackathons. This positive sentiment underscores the potential of data-driven collaborations for positive change.

National ministries should be involved in discussions about data use from the outset, as their involvement is crucial in defining data sets and the focus of solutions. This sentiment aligns with the goal of inclusive and equitable decision-making processes.

The private sector should be flexible and open when working with government entities and should share best practices, considering their different operational approaches. This positive sentiment recognizes the importance of collaboration, knowledge sharing, and adaptive strategies to achieve common goals.

The public sector should strengthen its capacities and develop a data culture. This positive sentiment emphasizes the importance of building the necessary skills and mindset within the public sector to effectively utilize data for decision-making and governance improvement.

In conclusion, the Latin America and Caribbean region faces challenges in the data ecosystem, but opportunities for improvement exist. Increased private sector participation, active government and private sector initiatives, and synergy between stakeholders are essential for achieving sustainable development goals. Building trust, adapting models, and addressing funding challenges are necessary steps in driving positive change. National partnerships, inclusive decision-making processes, and knowledge sharing are vital for social and economic development. The public sector should focus on capacity building and fostering a data-driven culture. Through these efforts, the region can overcome its challenges and pave the way for a more prosperous future.

Mike Flannagan

Microsoft has demonstrated a strong commitment to supporting nonprofits worldwide by providing nearly $4 billion in discounts and donations. These contributions aim to facilitate the work of nonprofit organizations and help them fulfill their missions more effectively. In line with this commitment, Microsoft has developed the Microsoft Cloud specifically for nonprofit organizations. This cloud solution is designed around a common data model that addresses the specific needs of nonprofits, such as attracting donors and delivering programs at scale. By leveraging this common data model, nonprofit organizations can access and utilize technological solutions more easily and affordably.

In addition to their financial support, Microsoft embraces corporate social responsibility (CSR) and actively tracks their work against the Sustainable Development Goals (SDGs). They have initiated collaborations across various sectors, including nonprofits, universities, companies, and governments, to promote data sharing and access. Through these collaborations, Microsoft aims to foster partnerships for achieving the SDGs and drive positive social impact. This approach highlights Microsoft’s belief in the importance of community engagement and their dedication to making a difference through technology.

Microsoft acknowledges the significance of open data in driving impact, even though full openness may not always be feasible due to privacy or commercial concerns. They emphasize the value of utilizing data in a more open manner to break down data silos and promote transparency and collaboration. This stance reflects their understanding of the importance of balancing data privacy and the benefits of sharing data for greater societal good.

Furthermore, Microsoft advocates for inclusive economic growth. They emphasize that economic progress should not only benefit shareholders but also consider the well-being and prosperity of a broader range of stakeholders. This is evident in the way Microsoft structures compensation for executives and employees, aligning it with principles of inclusive growth.

In line with the technological advancements of the modern era, Microsoft recognizes the urgent need for building skills globally, with a particular focus on technology, data, cybersecurity, and AI. They acknowledge the existence of a global skills gap in these areas and view their investments in skill building as beneficial both for the world and the future of their company. By championing skill training and development in these critical areas, Microsoft aims to empower individuals and enhance employability in a rapidly evolving digital landscape.

Mike Flannagan, a representative of Microsoft, views the collaboration between Microsoft and governments worldwide as highly valuable. Such collaborations enable governments to leverage Microsoft’s expertise and technology to address common societal challenges effectively. Flannagan also supports the adoption of global standards for data privacy and protection. Standardization in these crucial areas would simplify operations on a global scale and ensure consistency and compliance across borders.

Overall, Microsoft’s commitment to supporting nonprofits, tracking their work against the SDGs, promoting data sharing, advocating for inclusive economic growth, building relevant skills, and collaborating with governments reflects their dedication to driving positive change and using technology as a force for good.

Darlington Ahiale Akogo

Forming partnerships between the public and private sectors can be challenging due to language barriers and differences in procedures, often leading to frustration. The public sector, such as the government, possesses the reach and assets, while the private sector, particularly start-ups, offer innovation. However, their differing communication methods and procedures can create obstacles in establishing effective collaborations.

One solution to ease the formation of public-private partnerships is to have a facilitator with experience working in the public sector. This individual can bridge the gap between the two sectors and facilitate engagement. Additionally, international development agencies, experienced in working with both public and private sectors, can contribute to the formation and facilitation of these partnerships.

Successful examples of public and private partnerships exist, particularly in agriculture and healthcare. These collaborations have led to significant projects and data collection. For instance, a company formed partnerships with public health institutions, gaining access to a hundred years of data on Africans. Another project in agriculture involving the government and a public university resulted in the creation of the largest disease and pest data sets in the world. These success stories highlight the potential for effective collaboration between public and private sectors.

Adhering to data protection laws is crucial when handling sensitive data in these projects. It is important to consider the data protection laws of the country of operation. Even in the absence of specific regulations, following a standard like the General Data Protection Regulation (GDPR) ensures the secure handling of sensitive information. Maintaining data privacy and security is vital in public-private partnerships.
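As a rough illustration of the data-minimisation practice described above, the sketch below pseudonymises a record before sharing: it drops direct identifiers and replaces them with a salted one-way hash. The schema, field names, and salt are invented for illustration and are not drawn from any of the projects discussed in the session.

```python
import hashlib

# Fields treated as direct identifiers (hypothetical schema).
DIRECT_IDENTIFIERS = {"name", "phone", "national_id"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and add a salted one-way hash as a linking token.

    The token lets partners link records across datasets without exposing
    who the record is about.
    """
    token = hashlib.sha256((salt + record["national_id"]).encode()).hexdigest()[:16]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["subject_token"] = token
    return cleaned

record = {"name": "A. Mensah", "national_id": "GH-123", "phone": "0240000000",
          "diagnosis": "pneumonia", "age_band": "40-49"}
shared = pseudonymize(record, salt="project-secret")
```

Note that pseudonymised data still counts as personal data under the GDPR; genuine anonymisation also requires attention to quasi-identifiers such as age bands and locations.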

The political will to form partnerships is crucial for governments. Partnerships can help address the toughest challenges within a country by utilizing data-centric or artificial intelligence solutions. Governments should recognize the potential benefits of collaborations and demonstrate the necessary commitment and support to foster their formation.

Furthermore, the success of public-private partnerships within a government often relies on internal political agreement and consensus. Merely having a few agencies willing to fund these partnerships is insufficient; there needs to be broader recognition and agreement within the government. Creating an environment where different agencies within the government understand and value the potential impact and efficiency of collaboration is essential.

In conclusion, forming partnerships between the public and private sectors can be challenging due to language barriers and differing procedures. However, having a facilitator with experience in the public sector and involving international development agencies can facilitate the formation and success of these partnerships. Public and private collaborations have the potential to achieve significant milestones and data collection, particularly in agriculture and healthcare. Adhering to data protection laws and regulations is crucial when handling sensitive information. Governments should demonstrate the political will to form partnerships and strive for internal consensus and support within their agencies. By doing so, they can effectively address the toughest challenges within their countries, harnessing the power of partnerships for the benefit of all.

Mona Demaidi

The analysis highlights the crucial role of international collaboration and data sharing in AI research, supporting SDG 9: Industry, Innovation and Infrastructure, and SDG 17: Partnerships for the Goals. Accessible data is essential for AI research, and pooling resources, such as computational resources and talent, proves beneficial. Cross-cultural understanding is also important for giving research a global dimension.

Ethical considerations, including transparency and gender equality, must be prioritised in AI development, aligning with SDG 5: Gender Equality and SDG 16: Peace, Justice and Strong Institutions. However, ethical challenges, such as data privacy, security, and transparency, pose obstacles to international collaboration.

The lack of a structured protocol for data-sharing between different countries hinders progress in AI development. Harmonising legal frameworks to achieve transparency is a challenge, and data use and deployment must consider various aspects of data.

The MENA region lacks a legal framework for data privacy and protection, leading to hesitancy among the private sector in providing data due to uncertainty about the benefits of AI. Efforts are underway in the region to establish international standards for data sharing and create an AI ethics strategy.

Governments need to establish a governance structure to ensure the involvement of all stakeholders. The private sector should better comprehend the potential benefits of AI and the significance of structuring and labelling data to contribute to SDG 9: Industry, Innovation and Infrastructure.

In conclusion, international collaboration and data sharing play a vital role in AI research. Ethical considerations, challenges in data-sharing, and the absence of a legal framework for data privacy and protection need to be addressed. Efforts are being made to establish international standards for data sharing in the MENA region. Collaborative involvement and data sharing are key to efficient AI use and achieving SDG goals.

Philipp Schönrock

The analysis highlights the significance of public-private data partnerships in achieving the Sustainable Development Goals (SDGs). The speakers stress that the most successful initiatives are those where partners invest time and effort to establish a proof-of-concept, build trust, and adapt and iterate the value proposition over time. These partnerships play a crucial role in addressing complex global challenges.

However, the analysis also acknowledges significant challenges in initiating, completing, monitoring, and scaling up private-public data initiatives. One major hurdle is the lack of coherence in how these initiatives are developed and in their standard operating procedures. This inconsistency hampers the efficiency of public-private partnerships for the SDGs, particularly in the Global South. The enabling environment required for these partnerships is still lacking, despite the initial hype surrounding their potential. Overcoming these obstacles is essential to fully leverage the potential of public-private partnerships for sustainable development.

In addition to public-private partnerships, the analysis emphasizes the need for collaboration among the data, tech, and statistical communities. There are still critical data gaps that need to be addressed in order to better understand and tackle important global issues such as climate change, poverty, and inequality. Closing these data gaps requires the convergence of expertise from the data community, tech community, and official statistics. Through this collaboration, a comprehensive and accurate understanding of these issues can be achieved, leading to more effective strategies and actions.

Overall, the analysis underscores the importance of public-private data partnerships and collaboration among different communities to achieve the SDGs. The success of these initiatives hinges on trust, adaptability, and investment of time and effort. By addressing the challenges and working together, stakeholders can unlock the full potential of data-driven solutions for sustainable development.

Isuru Samaratunga

A research study involving 94 countries has emphasised the value of public-private data partnerships in the Global South for monitoring and achieving Sustainable Development Goals (SDGs). The study identified a total of 394 data actions within these partnerships, with a specific focus on SDGs related to climate action, sustainable cities and communities, and good health and well-being.
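Producing figures like these requires assigning each identified data action to the SDGs it touches. The study's actual coding scheme is not reproduced here; the sketch below shows one simplified, hypothetical way such tagging could be automated, with invented keyword lists and example actions.

```python
from collections import Counter

# Hypothetical keyword lists per SDG; the study's real coding rules were richer.
SDG_KEYWORDS = {
    "SDG 3 (Good Health and Well-being)": ["health", "disease", "hospital"],
    "SDG 11 (Sustainable Cities and Communities)": ["transport", "city", "urban"],
    "SDG 13 (Climate Action)": ["climate", "emission", "flood"],
}

def tag_sdgs(description: str) -> list[str]:
    """Return the SDGs whose keywords appear in a data-action description."""
    text = description.lower()
    return [sdg for sdg, words in SDG_KEYWORDS.items()
            if any(w in text for w in words)]

# Invented example actions, loosely echoing cases mentioned in the session.
actions = [
    "Big-data analysis of public transport users in Jakarta",
    "Disease and pest dataset for smallholder farmers",
    "Flood-risk mapping with telecom mobility data",
]
tally = Counter(sdg for a in actions for sdg in tag_sdgs(a))
```

In practice, such automated tagging would only be a first pass; the researchers describe applying a definition and manual classification to decide where each action's outcomes align with the SDGs.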

It is important to note that not all SDGs hold equal importance across different regions. Climate action, sustainable cities and communities, and good health and well-being were found to be the most commonly addressed SDGs in these partnerships. This implies that these particular goals are deemed more urgent and relevant within the Global South.

However, the research also acknowledged certain challenges in establishing successful public-private partnerships. Time and trust-building were identified as crucial elements for ensuring the success of these collaborations. Partnerships require dedicated efforts and active participation from both public and private entities. It is imperative to invest time in developing trust between the stakeholders involved to overcome potential obstacles and achieve desired outcomes.

Furthermore, the research suggests that large firms with global reach are better able to sustain these relationships. Their established networks and resources position them to navigate the complexities of public-private partnerships. This observation could have implications for future partnership formation, with an emphasis on involving influential and globally connected corporations.

Another key argument presented is the significance of having a legal framework in place to enable and support these partnerships. A well-defined legal framework can provide clarity on the roles and responsibilities of each party involved, facilitate decision-making processes, and offer protection for all stakeholders. The presence of a legal framework can enhance the effectiveness and efficiency of public-private partnerships.

Additionally, the research highlights the pivotal role of brokers in the success of public-private partnerships. Brokers act as mediators or facilitators between the public and private entities, bridging skill and capacity gaps, and providing technical infrastructure. Their involvement adds value to the collaboration by ensuring effective communication, negotiation, and coordination between the parties, ultimately leading to more successful outcomes.

In conclusion, public-private data partnerships hold tangible value in monitoring and achieving SDGs in the Global South. However, establishing successful collaborations requires time, trust-building, and the involvement of large firms with global reach. The presence of a legal framework and the role of brokers as mediators or facilitators are crucial factors that contribute to the success of public-private partnerships. By considering these elements, stakeholders can enhance their efforts in achieving sustainable development goals in the Global South.

Session transcript

Helani Galpaya:
Thank you for joining us today. We have about 23 people in the room and 22 people online, so I think certainly quorum to get started. This session is on public-private data partnerships with a particular focus on the Global South, the majority world. We are going to talk about the practical problems and possibilities around the public sector, the private sector and civil society working together using data to achieve various development objectives, particularly the SDGs. We know the argument that data is essential for understanding where we are in achieving the SDGs and sometimes to actually achieve the SDGs, so not just for monitoring them but for achieving them as well. We have two speakers next to me here, and I will introduce each one very briefly because everyone’s bio is online, and we have four speakers online and an online moderator. So I will first invite Philipp Schönrock, who is the director of CEPEI, an independent think tank that works through field-based analysis and high-level advocacy to scale up the participation of Latin America and the Caribbean within the global development agendas, to set the stage about why on earth we even need to talk about these public-private data partnerships. Philipp, over to you.

Philipp Schönrock:
Well, good morning, Helani, to you a very early morning. Good morning from Cali, Colombia, and I’m happy to join you today. And I’m going right in to what my colleague Helani just mentioned: five think tanks and universities from across the regions, and especially the Global South, as it was mentioned. We came together with the support of IDRC to understand one main point, which was the extent to which the private sector’s data-related contributions to public policy in the Global South are adding value. Very specifically, we will be talking about our exploration into three concrete phases. The first is the extent to which and how the private sector has contributed to the so-called data revolution of the SDGs. The second point is their contribution to good data governance practices. And last but not least, we will be talking today about the challenges that we are facing, not only from the private sector, but especially from government and civil society, in attempting to use and to work together on these sources of data. I believe, most importantly, I should let my colleagues talk and show what we actually found out through a mapping exercise in five regions and eight case studies with companies, looking at how we are actually able to produce much better value if we monitor and document these actions. And before handing over, I believe it is important to mention that across all regions in the Global South, the most successful examples that we have found in public-private data initiatives are the ones in which partners have invested time and effort to establish a proof-of-concept, build trust, and adapt and iterate the value proposition over time. 
And I believe, Helani, I will turn over to you after mentioning the significant challenges that we had in initiating, completing and monitoring, and especially in scaling up private-public data initiatives, because of the lack of coherence that we found in how they are developed and in the standard operating procedures needed to develop them. So this is all from my side, Helani. I will hand over to you, because a lot of our colleagues will be sharing their insights with us now.

Helani Galpaya:
Let me ask you a quick rebuttal question before we move on to the next speaker. Wasn’t this, you know, started with a lot of hype that the private sector would be a huge partner in monitoring the SDGs? So, I mean, we’re still talking about it, seven years from everyone collectively not achieving the SDGs. Is that why this is really important now? Or shouldn’t this have already happened? I mean, the private sector should be working quite efficiently with the public sector?

Philipp Schönrock:
Yes, we should, and it is still not happening. We are still on the hype, but not on the how. I believe we have had quite a lot of good examples, but what we have not had is an enabling environment where the private sector comes together with the other actors. I believe this is something we have seen throughout the last seven years: the hype has not actually brought together the data community and the tech community with official statistics to close data gaps. And I believe the most important thing to say here is that the hype has not remained; it has been losing ground, and we are still not closing the data gaps that we need. Good examples are out there, and we will show them, but it’s imperative that we have these partnerships, especially in the Global South, to help us close those data gaps around climate change, poverty and inequality, and that’s where we’re still missing the point. 
And like I said, we are not seeing the enabling environment at all levels.

Helani Galpaya:
Thank you, Philippe. I’ve now asked Isuru Samratunga, who’s sitting on my right, your left. He’s a research manager at Learn Asia, which is a pro-poor, pro-market think tank working in South Asia and Southeast Asia. Isuru leads Learn Asia’s qualitative research work across a range of digital technology policy issues. What we’ve asked him to do is to frame the discussion by summarizing some of the findings of this Global South study to understand the state of play through some evidence. Thank you.

Isuru Samaratunga:
Yep, thank you, Helani. And I hope you can see my slides. Yes. So actually, Philipp talked about why public-private data partnerships are important in the Global South. So let me take you through some of the key findings of the research that we did on public-private data partnerships. So just to give you the context, in this study we tried to explore private sector involvement in data-related initiatives, and also how those initiatives can have an impact on public policies as well. Again, we looked at how these public-private data partnerships can contribute to achieving the SDGs, as well as, if you want to monitor such achievements, how these partnerships can help. So our study spanned five regions in the Global South, and these are the countries and the regions that we covered. If I tell you a little bit about the methodology that we followed, we had two work streams to gather data for this particular study. The first one was a structured mapping study of public-private data partnerships in the Global South. There we covered 94 countries and found 394 data actions. When I say data actions, that included things like capacity building and skill sharing, and also data collaboration, data governance, data mapping, things like that. The second part of this particular study was a qualitative study. We selected eight cases from our mapping study and did in-depth case studies based on our mapping study findings as well as, you know, thinking of the diversity of these data partnerships. Those eight case studies also covered the five regions. So let me take you through some of the key findings of this particular study. We saw that not all the SDGs are treated as equally important. 
So we saw that across all the regions, climate action, sustainable cities and communities, and also good health and well-being are the most prioritized or most common SDGs focused on through these public-private data partnerships. And also, if you look at how these SDGs were prioritized by different regions, you see the priorities are different. For example, in Africa, you see that good health and well-being has become the most important thing, and in Asia, it is zero hunger and sustainable cities and communities. But if you go to the Caribbean region, it’s obviously quality education. And if you take the LATAM and MENA regions, more or less it is climate action that got the attention in data actions. So what we found from these work streams was mainly that there is a tangible value, a real-world value, of these public-private data partnerships. It helps you to monitor and also achieve sustainable development goals. We did this study after the pandemic situation, and we really saw how a common thread, or a crisis, made different parties come into partnerships. If I give you an example from Nepal, where we had an earthquake in 2015: after the crisis, when the government wanted to find out where the vulnerable communities were, data analytics helped on that front. But also, successful partnerships can take time, and they need trust building. These partnerships sometimes need a lot of time because you need partnership building, and sometimes you need to have a dedicated person, who can be from the private company or from the government organization. And also, we are seeing that large firms with global reach are better able to sustain such relationships. It is also good if there is a legal framework in those countries that can enable the partnerships. And we also saw that some partnerships mostly depend on personal kinds of relationships. 
So we saw some partnerships hadn’t achieved the goals because of certain changes and also due to the informality of those partnerships. So we suggest kind of like a standard operating procedure that might help on those things. And also the government like to engage with private sector data that have an impact on multiple policy areas. For example, one study in Indonesia where we found that a big data analysis of public transport users in Jakarta city where that analysis helped the government to not only to plan their public transport but also to understand the demographics of the people who are using the public transport. For example, like the gender and also the vulnerability, the disabilities of these people and the persons who are using the public transport as well. So that helped a lot in many fronts. And so this brokerage role, it can help the public-private partnerships. When I say brokerage role, it is like as a mediator or kind of a facilitator, you connect the public sector and the private sector because there are like gaps, for example, like skill gaps and also capacity gaps as well. So these brokerage entities can fulfill those gaps and the bridge can be really success and also can provide good insights into data and analytics as well. So also, again, like providing technical infrastructure is also important. That also can be provided by the brokers in certain locations. So these are the main findings that we can share with you from our study, but happy to answer any questions if you have. Thank you.

Helani Galpaya:
So Isuru, I’m guessing when you did this mapping study, companies and initiatives don’t go and say, oh, we’re working on this SDG. They are working on some partnerships, right? So what do you do? You go and try and read through it and try and assign some SDG, like in that chart that you showed us? Yeah, exactly. So we saw that some organizations, they do a lot of good work, but they don’t know that it can be contributing to one or two or multiple SDG. So in that case, we need to, I mean, from research perspective, if I say, we had kind of a definition on how to identify these SDGs and their contribution to certain achievements and all that. So we did that classification and see where the most of outcomes of those actions are aligned with SDG, and then that’s how we did that. I’m going to come next to… to Mike Flanagan, who is joining us online. He is the corporate vice president at Microsoft, and he leads customer success and services globally for Microsoft’s commercial customers and products. And Microsoft is one of the companies that came up repeatedly in some of the studies, I think, that you were showing. Interestingly, not just about sharing data, but doing a lot more of the data action, setting the standards, providing infrastructure, providing capacity building, and so on, and across all the regions. So Mike, you don’t run a philanthropy, you run a commercial business. How does Microsoft, and perhaps other organizations, but certainly yours, how do you look at this trade-off between generating revenue and what looks like philanthropic activity, which is a government or a civil society organization comes knocking and says, we’ve got a problem, your data can really help us understand where we are or solve that problem?

Mike Flannagan:
I think for us, partially it starts with a culture and the belief that companies that can do more should do more, and certainly Microsoft is in a very fortunate position in terms of our ability to do more. You’ll notice in all of our disclosures, for example, in corporate social responsibility, we do actually map a lot of the work that we do back directly to the SDGs. And because of the way in which we track our work against those, we are not only very proud of what we’re able to do, but we’re able to track a lot of the work that we do back to the direct impact. I think in the past fiscal year for us, we supported nonprofits around the world with nearly $4 billion in discounts and donations, which many technology providers do. But going beyond that, we also have created the Microsoft Cloud for Nonprofit, which is bringing all of our product capabilities together, but around the common data model for nonprofits that brings together data sources for the purposes that most nonprofits need. Things like attracting and growing their donors, delivering their programs at scale, engaging with their audiences. These are things that are common and require a common data model. But we pair that work that we’ve done around the data and the data modeling with discounts and such, so that not only are we providing commercially more approachable technology solutions, we’re doing that in a way that, through common data models, also makes it easier for nonprofits to achieve their mission with a lower overall investment from their organization. And so for Microsoft, we don’t see them as mutually exclusive. We believe that a lot of the work that we do in enabling the commitments that we make around philanthropy actually not only does good for the world, but ultimately helps our commercial objectives. And then we also, as I mentioned, believe that through our commercial objectives, we have a responsibility to give back. One of the things that we talk a lot about is data. 
And I mentioned some of the data modeling that we’re doing. I think over the past three years, Microsoft has launched 23 different collaborations around data across nonprofits, universities, companies, and governments that help promote access to data. One of the things that we’ve learned from that is that while open data is really important for impact, data doesn’t always have to be fully open in order to be useful. Sometimes even if data can’t be made public due to privacy or commercial sensitivities, there are ways that that data can be used in a more open way so that we can break down some of the silos. I think that’s one of the areas in which we need to continue to do more.

Helani Galpaya:
And one of the things I think, given Microsoft’s sort of size and large market share, what you say makes total sense, because even if the market only pays off in 10 years, you can afford to make those kinds of investments. But I’m curious how it works inside the company. At a large company, the marketing guys are probably on a quarterly or annual bonus scheme, right? But the feedback from the long-term market opening up, because let’s say you’re going into a relatively low digital connectivity country, eventually you will gain high market share for Microsoft products by helping this country achieve some of the digital SDGs, that’s a sort of different time frame. Is there a conflict of incentives among different business units? And how do you deal with it?

Mike Flannagan:
I mean, of course, there are the realities of the obligations that we have to our shareholders, but part of what we are extremely clear about is that our shareholders expect that, by and large, economic growth must be inclusive. So we don’t hear from our shareholders that it’s all about profit. And so our executives and our people in the way that their compensation is structured also do not hear that it’s all about profit. We believe, of course, that we have responsibilities commercially, but that as we achieve our economic growth as a company, that must be inclusive. We have to help individuals and organizations and communities to succeed because ultimately, if we are doing the things that we need to do, the commercial results will come from that. But also, we will have a world that is more equitable and better for all of us to operate in. If I think about the skills that we need to build around the world, we have a huge gap in skills today around technology, in general, around data in general, cybersecurity and AI are particularly acute areas of need, where if we don’t help with building those skills and helping train people for the jobs of the future, ultimately, we won’t have the people that we need to do the work that fuels our future growth. We see a lot of those investments not only as good for the world but also good for the future of the company. I think those short and long-term objectives can be balanced by commercial organizations.

Helani Galpaya:
Thank you, Mike. I’m going to ask Darlington Okongo who is joining us online. He’s the founder and CEO of many companies, including Mino Health AI Labs and others that are working in the AI space. He also interestingly is lead, for example, he’s lead for the topic group on AI for Radiology at the ITU and the WHO focus group on artificial intelligence and health. So, he straddles this very specific AI in a local market, as well as this global multilateral data, health and agriculture kind of roles. So, given these two roles, Darlington… and if you can hear me, could you talk about the biggest sort of challenges in forming partnerships? We know AI is driven by data and your companies may also be producing data. What are the challenges and are they different at a very local national level versus maybe at an international level where you’re trying to do something for the globe? Thank you.

Darlington Ahiale Akogo:
Right, I mean, thank you for having me. That’s a very interesting question. So the starting point is this: the public sector, say government, usually has the reach and the assets. So, you know, if you’re looking at the government of a country and you want to do something in agriculture, for example, and we did a project in agriculture, the government has extension officers in every district of the country. You in the private sector probably don’t have that reach, or it takes a very long time to build that reach. What the private sector really offers, especially if you’re looking at startups, is innovation. So they can come up with how to, you know, better leverage those assets to create new data and create solutions out of it. So that’s the major opportunity that lies there. Now, the challenge is that because, you know, the public sector is different and the private sector is different, usually there’s a language barrier. And what I mean by that is, you know, the public sector communicates a certain way. They go about things in a certain way. There are certain, you know, procedures, which for a private sector entity can be quite frustrating to deal with or even understand. And sometimes it feels like, you know, you’re speaking different languages. And so that’s the largest challenge I see. Sometimes too, you need to make it clear what the incentives are on both sides. Because if it’s not clear, then it leads to a problematic back and forth, where the government has a certain angle to what they want from the partnership, and the private sector wants a certain angle. But the language barrier and all of this can usually be solved if you have a facilitating person or entity. So for us, for example, we have someone who has had years of working within that space, who was the regional director of AstraZeneca and has done a lot of PPP partnerships. 

And so when we brought them in, they facilitated all the public sector engagement for us. But also, sometimes you can have international development agencies that are funding projects that require PPP. In that case, they can facilitate that kind of relationship, because they have extensive years of working with the private sector, as well as extensive years of working with government. But to highlight just two of these successes: so I mentioned the one in agriculture, where we formed a partnership that was actually both government and a public university, and then ourselves. And we were able to collect data across every single region of Ghana, which is about 16 regions. And then we replicated this in four different African countries. This was for disease and pest data sets, which was a project funded by Lacuna Fund. And this data set was then used to build AI solutions to help farmers. And out of this, we ended up creating the largest disease and pest data sets in the world. It wouldn’t have been possible if we didn’t have that PPP. And in healthcare, we have now built AI systems that are able to interpret medical images to support detection and everything. And we’ve now formed partnerships with public health institutions that are giving us access to about a hundred years of data on Africans. And we are using this data now to build large language models, something similar to, let’s say, GPT-4, but then for healthcare, for radiology, all the other sectors
And I’m guessing this is anonymized and not personally identifiable data, but that would be one of the policy challenges, particularly in the health sector, less so in agriculture. Do you come across this? And how do you deal with that kind of problems? Policy? Exactly. Yes. So one starting point is data protection laws. You have to look at the data protection laws within the country you’re operating in. And for most of them, you know, they would usually tell you P. So PII should not be something that, you know, you should be dealing with. You should try to anonymize the data. And so Ghana, for example, has had data protection laws for several years now. And it’s quite clear what you can do. You need a consent of users if you need to, you know, assess certain data, even if you anonymize it and everything. So, yeah, you have to look at those. Those laws are important. But the issue sometimes is that some countries don’t have data protection laws. But just to be on the safe side, just take the standard. If you want to use GDPR, which is one of the most, you know, well-defined versions, you can take it and leverage it. And just, you know, in the future, you don’t want to create issues because in healthcare, for example, you are building solutions that are beneficial to people. But at the same time, you don’t want to have to do it in a way that is irresponsible because at the end, it will cause damages to even your good intentions become muddied by bad outcomes. So just to be on the safe side, even if there are no regulations that are preventing you from doing it, just do the right thing. It saves you in the long term.

Helani Galpaya:
Thank you, Darlington. We now come to Dr. Mona Demaidi, who is an entrepreneur and women’s rights advocate from An-Najah National University in the state of Palestine, and who is now joining us online. Mona, you are also an AI researcher. Given your professional as well as academic background, and you’ve worked in multiple countries, right? What do you think are the big challenges in international collaborations on data sharing, particularly when it comes to AI research and applications?

Mona Demaidi:
Thank you so much for your question. I’m very happy to be with all of you today. So I’ll start by saying that we all know that AI research is based on data and having access to data. So international collaboration and data sharing are super important to achieve good research. The way I see it, that kind of collaboration and data sharing matters for many reasons. And I’ll start with having access to diverse data resources. So international collaboration could help us understand more the local context of different countries, understand more the local challenges of each single country, and work around them. The other important part of international collaboration is pooling resources. AI research in general needs a lot of computational resources, talent, and power, and having that kind of resource sharing is super important. Another important aspect is cross-cultural understanding. The way we work now in terms of research, each country is actually focusing on its own challenges, trying to capture the data in its own context. However, that kind of international collaboration is super important to ensure that our research actually takes on a global aspect. And it also addresses challenges that, we’re going to be surprised, most of us are facing regardless of the countries we’re living in. So it’s very important to have that kind of cross-cultural understanding. The other important part of international collaboration is the ethics part. Most countries now have their AI strategies coming out, and AI policies. And one thing that we’re still struggling with is the ethical part. How will we ensure that we’re actually addressing transparency? How are we ensuring that we’re addressing gender equality regarding the data and the resources we’re working on?
So having that kind of international collaboration will help us address such issues. And it’s also going to help us in terms of data privacy and security. So you asked me about the challenges and obstacles. I think what we’re still facing in terms of research and international collaboration is that there is still no structured pipeline between these countries on how to do the data sharing, what kind of policies and data security measures we need to focus on, how we are addressing ethics, what kind of data we could work on. You know, each country has its own legal framework and its own policies. So how could we ensure that we’re actually interacting with each other in a very transparent way, and that we’re using and deploying the data in a way which actually takes into consideration different aspects of the data? So I think these are the main challenges I’m thinking about. However, again, there’s a lot of opportunity in having that kind of international collaboration.

Helani Galpaya:
So interestingly, I mean, you know, there are data protection laws that are coming up in some countries without any exceptions for taking the data out, even for research purposes, without journalistic exceptions and so on; you just can’t transfer that. So who do you work with across borders? Is it other researchers? And also in terms of techniques, does that mean you have to use sort of sophisticated, federated learning kind of things which keep the data in the countries but still allow you to use it? Or do you have to get special permission to transfer data across countries?

Mona Demaidi:
So in the MENA region context, we work mostly with researchers from other countries. That’s the main part. The main issue we’re still facing is that a lot of countries in the MENA region still don’t have that kind of legal framework related to data privacy, protection, and data access. So this is still a huge challenge. So the way we actually work around it is either we have consent agreements with the private sector we’re working with, that’s one way to look at it, or we do the analysis and deployment on their own frameworks and platforms without actually taking the data out. And one main issue we’re still facing is also the lack of awareness. So yes, we want to do that kind of research where we go and approach the private sector in the MENA region, but there is still a lack of awareness of what kind of applications we could apply the AI to. They’re not that comfortable actually providing us with their data. And even if they do, they’re super cautious about how the data is going to be used and how it’s going to be beneficial for them. The good news is that we did some, I’ll say, very good proofs of concept in terms of having international collaboration in the MENA region. So recently we actually deployed an AI bootcamp across the MENA region in which we brought governmental, private sector, and international experts all together on one small platform to give us more understanding about where we’re going, what the current challenges are from the governmental and private sectors, how we could work around them, what kind of rules and regulations we are still missing, and all of that. And the good news is that there are a lot of promises coming from these sectors. However, we still need to consider the ethical and legal framework in a more cautious way, especially in the MENA region.
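The "analysis and deployment on their own frameworks without taking the data out" approach described here is, in spirit, what federated learning formalizes: only model parameters cross organizational or national boundaries, never raw records. The toy sketch below uses hypothetical data and a one-parameter linear model purely to show the round structure of federated averaging; it is not a production setup.

```python
# Toy federated averaging: each partner trains on its own data locally,
# and only the updated model parameter (never the raw records) is shared.

def local_update(w, data, lr=0.1):
    # One gradient-descent step for a 1-D linear model y = w * x,
    # computed entirely on the partner's own infrastructure.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(w, partners, rounds=50):
    for _ in range(rounds):
        # Each partner returns only its locally updated parameter;
        # the coordinator averages them into the new global model.
        w = sum(local_update(w, d) for d in partners) / len(partners)
    return w

# Hypothetical datasets held in two countries; both follow y = 3 * x.
site_a = [(1.0, 3.0), (2.0, 6.0)]
site_b = [(3.0, 9.0), (4.0, 12.0)]
w = federated_average(0.0, [site_a, site_b])  # converges toward 3.0
```

The design point is that the coordinator never sees `site_a` or `site_b`, only the numbers each site sends back, which is what makes the pattern compatible with data-localization rules.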

Helani Galpaya:
Thank you, Mona. We finally come to Rodrigo. So Rodrigo, we heard about the really important role that sort of brokers play in the stuff that Isuru was presenting. And we certainly saw in one of the case studies that he was referring to, like Pulse Lab Jakarta was part of the UN system, had enormous convening power and brought together the government of Indonesia who were data users and wanted to understand what was happening in the country, and private sector partners, including telecom companies, social media companies, et cetera, and brokered the deal and also provided the expertise. So it sounds to me a little bit like, you know, somewhat of your role. So you’re the senior program manager at the Trust for Americas, a not-for-profit affiliated with the Organization of American States, which is an intergovernmental body. So in your experience, what does it take to actually bring these partnerships into life and how do you approach this?

Rodrigo Iriani:
Yes, thank you. And thank you again for inviting us and considering the Trust for this case study. So first, to understand my points a little bit better, we need the context of the Latin America and Caribbean region as one of the most unequal regions in the world, one that faces different challenges around the data ecosystem across different countries, especially in the Caribbean. So the data ecosystem in Latin America has relatively little participation from the private sector, and that is evident in all the regional conferences around open government and open data. So I would like to highlight the three main findings of the case study that resonate the most with the work we are doing in the region. The first one is one of the points that Philipp mentioned when he started the presentation: the most successful examples of public-private data initiatives are ones in which the partners have invested time and effort to establish a proof of concept, build trust, and adapt and iterate the value proposition over time. This is especially true for us at the Trust, as we have a distinctive operating model that focuses on the final beneficiary in vulnerable communities, and this is especially important given the point that Mike just made in terms of tackling the gap in skills that we face in the region. So we, as a non-profit organization affiliated with a multilateral organization, the Organization of American States, try to implement different projects aligned to the mandates of development, human rights, and democracy. But we also have a strong DNA in the private sector component. We as an organization were created as part of the OAS, but as an arm of the private sector to participate in development initiatives. So that is why I think it’s important that the case study mentions the importance of these convening organizations and their role in mobilizing private sector actors.
One of the main examples that we have is the Democratizing Innovation in the Americas program, which mainly focuses on capacity building in digital skills and data literacy, promoting co-creation processes and the development of local solutions to local problems. And this specific initiative has been supported by private companies such as Microsoft for the last couple of years, and the Citi Foundation, but it also has an important role in terms of involving local government, local private sector, civil society organizations, and academia as well. In the past nine years, we have impacted over 11,000 beneficiaries, mainly youth, but also representatives from civil society, the private sector, and other stakeholders. The final two highlights that I would like to mention, and that resonate a lot with the work we do in the region, are that established relationships are powerful factors in private sector engagement and mobilizing initiatives, and that there is a need for active government and private sector initiatives that provide connectivity between digital and data skills demand, capacity building, and employability. Recently, we have noticed a shift in terms of philanthropic support and in terms of development projects, where mainly multilateral development banks, for example, require more synergy between the private sector, government entities, and civil society. I think we have a very strong role there in convening all these different partners for the Sustainable Development Goals. Finally, to mention a specific example in Jamaica, and I know we have Dr. Minot there on the Zoom call who can attest to this effort, one of our projects is called Unleashing the Potential of Jamaican Youth through Empowerment and Training, where we partner with the Inter-American Development Bank, Microsoft, and the National Commercial Bank in Jamaica.
We collaborate with the Ministry of Education and Youth and local stakeholders in Jamaica to train 1,500 youth in digital skills and data literacy.

Helani Galpaya:
So does this make it easier because you guys are a regional organization? Like when you approach a private sector company, are you saying, you know, we’ve got five countries that can use your data? So you’re not at a country-level negotiation or does that not make a difference?

Rodrigo Iriani:
I think that the platform we have established in the past 25 years, the relationships we have with multilateral organizations, with local and national governments, and with other big multinational or multi-Latina companies, that kind of platform allows us to generate trust in terms of getting new partners on board. And that has been one of the main strengths or assets that we have as a nonprofit. However, recently, and mainly after the pandemic, we have also noticed that this is not always evident in terms of support or philanthropic donations. Second-layer foundations like the Trust also face different challenges in terms of gathering new funding, because many private sector or other international donors don’t cover, for example, operational costs. So that has also required us to adapt the model to continue impacting the communities we serve.

Helani Galpaya:
Thank you. We are open for questions. We have two already, I think, in the chat, and we are happy to take questions from the audience here as well. But while you gather your thoughts, let’s take the first question from my friend, Naina. Can somebody unmute her, please?

Audience:
I hope someone can hear me, I still can’t have my video on. We can hear you clearly. Thank you. So this has been a very insightful panel, and I’m happy I hopped in. Helani, here is what I’m thinking. As much as we might want to talk about partnerships between private, public, or at international levels, my question is at the national level, because that is where the rubber hits the road. What kind of national partnerships between private sector, civil society, and governments are needed for us to have a robust data economy? Whether it is private data, government data, or open data, what kind of partnerships do we need at the national level? Thank you.

Helani Galpaya:
Would any of the panelists like to take the question? National-level partnerships?

Rodrigo Iriani:
I can jump in really quick. In terms of national partnerships and private sector data, what has worked a lot for us is activities we developed with a regional perspective in terms of using data for social and economic development, such as hackathons or other initiatives where youth can use data to create their solutions. It has worked a lot in terms of partnerships between the private sector and national ministries, depending on the topic, but from inception, in terms of defining the data sets and being very specific on where we want to focus the solutions. There’s always a challenge in terms of creating specific outcomes from the use of the data, so from the inception I believe it is important for national ministries to be involved in the discussion.

Helani Galpaya:
If I may add, I think whether it’s a ministry or whether it’s a trusted third party or a not-for-profit, somebody needs to be involved, because a lot of the time the value of this data comes when multiple companies and multiple government departments come together, and particularly if companies are coming together, commercial sensitivity arises. So you know the classic example of taxi data: in a country like Sri Lanka, Uber will have Uber’s data, the two local companies will have their own data, but it’s the pooling that’s really valuable for each of the other companies as well. But then there’s commercially sensitive data, so who is going to be trusted, whether it’s a government entity and so on, I think. We have another question, a response to Naina’s question, from Ayaleve Shabeshian. I’m sorry, I might be mispronouncing your name. If you can speak, or Maurice, our online moderator, can read out her question. Maurice, you want to go? Okay, Ayaleve can’t unmute, so Maurice, you will have to do it. Okay, can you hear me? Yes, we can.

Audience:
Thank you so much for giving me this opportunity to read my question. I joined the IGF in Addis Ababa, Ethiopia last year, and I was advocating, asking the Secretariat to set up international standards. So what we do now is we have public-private partnerships. Every country has different data, every country has different regulations and rules, and every country also has different licensing. So in terms of accessible data, which is information, I think we need to set up international standards at least. What we see now is each country trying to tackle this with the local government and also some of the sponsors. And as we observed, there is a big issue: one of the obstacles for private companies is accessing government data, and the government is not allowing them to do what they want to do. So that’s actually a great obstacle to the improvement of the technology. So my question is now, how do we set up international standards, through the United Nations, the IGF, at least for the basic framework, and then follow that with the bylaws, the country law, international law, or standards, so that everyone can at least start from basic standards together? Especially South-South cooperation will be important in the future, and we need to concentrate on setting up the rules, otherwise there will be distorted data setups and distorted use of the technology. So what are the initiatives? My question is, what are the initiatives from this IGF to actually set up, through the Internet Governance Forum, international governance law? That’s my question. So thank you so much.

Helani Galpaya:
Thank you for your question. I mean, there are certainly multiple discussions around cross-border data transfer, and I think the answer in part probably depends on the sensitivity of the data. It also probably screams for a government of Japan representative on this panel to talk about cross-border data transfer with trust, which is one of the initiatives that they have proposed; there was actually a panel on it yesterday. But to the panelists, on international standards to share data across borders: what can we do? I think, Mona, you touched upon that a little bit, but is there any activity at an international level happening on this?

Mona Demaidi:
So I can cover the MENA region context. What we’ve been doing recently: an international team comprising representatives from MENA region countries actually came together, starting with something called AI ethics in the MENA region; then they created the strategy, and now they’re working more on the data itself. So the way I would put it is that we’re all still taking baby steps everywhere. People are still figuring it out, and countries and governments are still figuring out which aspects we actually need to focus on more, especially when it comes to data sharing. And again, as you said, the sensitivity is still super, super important for us to focus on. And I would say generally that things are moving. We’re still not moving very fast. However, I can see in the MENA region, for example, as I said, most countries now have their strategies, most countries are now actually working on the ethics, and more countries are now seeing how they could start working on the data sharing component. So to be honest, I don’t have, let’s say, the accurate answer to such a question, but some movement is happening there, and I expect to see something coming in the next two years, hopefully.

Helani Galpaya:
Thank you, Mona. Maybe Darlington wants to add something. Thank you. I think we are nearly at the end of the session. There’s a comment from the MENA region about how data philanthropy has to be fully participatory and inclusive, and a really interesting comment from Weiyu online, which says the fact of private data partnerships is that big techs treat different countries differently. For instance, they give data access to US academics, but not Singaporean academics. The Global South is again at a disadvantage, and we need to do something to change this imbalance. The previous question on international standards is very relevant here. I would very much agree. And I think we’re out of time, but there is a question from a colleague from Ghana, and I’m going to read it given the small amount of time we have left. With the recent acceptance of technology and management systems in Africa, especially in the health sector, what would be your advice to all these African countries in terms of interoperability and sharing of data in the absence of regional data protection laws? So I think along the same lines, really important. I know there are regional initiatives that are happening, but perhaps not fast enough. And the example of central bank digital currency international data transactions, SWIFT versus BRICS, as an international standard, has been pointed out. So that’s just some of the summary. Two things we would ask of you and the panelists. The first is a very quick two-minute poll. If you can go to slido.com, www.slido.com. Isuru, if you can project that. If you go to slido.com and enter 2763179, the question is: the main reason preventing the private sector from sharing data that can help monitor and achieve the SDGs is, and there are three options. Which one? And while everyone does that, I’m going to ask the panelists a question. Whoa, lack of incentives. Okay.
Unified on the lack of incentives to do so. Not so much. Okay. While the poll is going on live, could the panelists think about: if you had one ask, in the case of the private sector, if you had one ask of government, if you could change one thing that they do differently, what would it be? That’s a question to Mike and to Darlington. What would you ask them to do differently when it comes to public-private data sharing? From Rodrigo, I’m going to ask, what would you ask the private sector to do differently and government to do differently, one thing each? And the same question for Mona. Thank you everybody. So the broad consensus, roughly 75 percent, three-fourths, is the lack of incentives. The second is low capacity of governments, and a few responses for policies that actually prevent data sharing. Thank you. Closing round, one-minute interventions. I will start with Mike.

Mike Flannagan:
Well, first let me just say I think the collaboration between our organization and governments around the world is generally quite good, and we very much appreciate the partnerships that we enjoy with governments around the world to solve some of our very difficult challenges. I think someone mentioned earlier international standards for data protection. Certainly I think it would be helpful for everyone if we had more global standardization on data privacy and data protection. That would make it much easier to operate on a global scale. I’m not sure if it’s me, but I’ve lost the ability to hear you.

Helani Galpaya:
Thank you, Mike. I was asking Darlington, what’s your one ask?

Darlington Ahiale Akogo:
My biggest ask would be that governments should have the political will to form these partnerships. It makes all the difference. The difference between having a few agencies within government willing to fund PPPs, and how far these PPPs go, usually comes down to political will, and it makes all the difference. So across different countries, more governments and more agencies should internally have that buy-in. They should see the benefit of these partnerships, how they could lead to solving the toughest challenges within the country, either by using data to better understand them or by building AI solutions with that data that can help address those problems. But there needs to be that political will internally to make it happen.

Helani Galpaya:
Thank you. Mona, I’ll come to you next.

Mona Demaidi:
Yeah, for the governments, I would say that they have to work on establishing a governance structure to ensure that everybody is involved and to push for having data shared. For the private sector, simply, and this is based on my experience, they need to work more on understanding how AI could help them, and on understanding the importance of actually structuring and labeling their data and making it usable for everybody.

Helani Galpaya:
Thank you. Rodrigo. Thanks.

Rodrigo Iriani:
For the private sector, I would say: be more flexible and open when working with government entities, and share best practices, as they operate very differently. And for the public sector, I would say: strengthen capacities and create processes inside the government to develop and establish a data culture in public offices.

Helani Galpaya:
Thank you. So: political will, capacity, and a data culture within government; the private sector to be a lot more willing to collaborate, because they work on different sorts of time scales; and, across countries, international agreement on how we can share data across borders, with international privacy and related laws and protections, so we can enter into partnerships without worrying. Thank you to the panelists, and to Isuru as the presenter; thank you to the online audience and the in-person audience, thank you very much. Enjoy the rest of IGF. Thank you.

Speaker statistics

Audience: 123 words per minute; 463 words; 226 secs
Darlington Ahiale Akogo: 183 words per minute; 1223 words; 402 secs
Helani Galpaya: 158 words per minute; 2549 words; 966 secs
Isuru Samaratunga: 125 words per minute; 1017 words; 490 secs
Mike Flannagan: 157 words per minute; 919 words; 351 secs
Mona Demaidi: 210 words per minute; 1290 words; 368 secs
Philipp Schönrock: 167 words per minute; 735 words; 265 secs
Rodrigo Iriani: 138 words per minute; 1119 words; 487 secs