Data Protection for Next Generation: Putting Children First | IGF 2023 WS #62

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The discussion focuses on the necessity of age verification and data minimization in relation to children’s rights in the digital environment. It is argued that companies should not collect additional data solely for age verification purposes, and trust in companies to delete data after verification is considered crucial to protect children’s privacy.

Another important point raised in the discussion is the need for the early incorporation of children’s rights into legislation. The inclusion of children in decision-making processes and the consideration of their rights from the beginning stages of legislation are emphasized. This is contrasted with the last-minute incorporation of children’s rights seen in the GDPR.

The discussion also advocates for the active participation of children in shaping policies that affect their digital lives. Examples of child-led initiatives, such as Project Omna, are mentioned to illustrate the importance of including children’s perspectives in data governance. The argument is made that involving children in policy-making processes allows for better addressing their unique insights and needs.

The role of tech companies is also explored, with an argument that they should take child rights into consideration during their product design process. Collaborating with tech companies to develop age verification tools is suggested as a means of ensuring the protection of children’s rights.

Additionally, it is noted that children, often referred to as “Internet natives,” may have a better understanding of privacy protection due to growing up in the digital age. This challenges the assumption that children are unaware or unconcerned about their digital privacy.

The discussion concludes by highlighting the advocacy for education and the inclusion of children in legislative processes. Theodora Skeadas’s experience in advocacy is mentioned as an example. The aim is to educate lawmakers and involve children in decision-making processes to create legislation that better safeguards children’s rights in the digital environment.

Overall, this discussion underscores the importance of age verification, data minimization, the incorporation of children’s rights in legislation, the active participation of children in policy-making processes, and the consideration of child rights in tech product design. These measures are seen as vital for protecting and promoting children’s rights in the digital age.

Edmon Chung

The discussion revolves around various important topics related to internet development, youth engagement, and online safety. Dot Asia, which operates the .Asia top-level domain, plays a crucial role in these areas. In addition to managing this domain, Dot Asia uses the earnings generated from it to support internet development in Asia. Moreover, Dot Asia runs the NetMission program, which aims to engage young people in internet governance. These initiatives are viewed positively as they promote internet development and youth engagement in Asia.

Another significant development is the launch of the .Kids top-level domain in 2022. This domain is specifically designed to involve and protect children, based on the principles outlined in the Convention on the Rights of the Child. By prioritizing children’s rights and safety, the .Kids initiative aligns with the principles of the convention. This positive step highlights the importance of involving children in policy-making processes that affect them.

Cooperation among stakeholders is emphasized for ensuring online safety. Various forms of online abuses and domain name system (DNS) abuses exist, requiring collaborative measures to create a safer online environment. The .Kids top-level domain is seen as a valuable platform to support online safety initiatives. By creating a dedicated space for children, it can contribute to the development and implementation of effective online safety measures.

The discussion also focuses on privacy, particularly in relation to data collection and age verification. Privacy is not just about keeping data secure and confidential but also about questioning the need for collecting and storing data in the first place. The argument is made that data should be discarded after the age verification process to strike a balance between protecting children and safeguarding their privacy.

The use of pseudonymous credentials and pseudonymized data is suggested as an appropriate approach to age verification. These methods allow platforms to verify age without accessing or storing specific personal information, addressing privacy concerns while still ensuring compliance with age restrictions.

Additionally, it is highlighted that trusted anchors should delete raw data after verification, and regulation and audits are necessary for companies that hold data. The importance of building the capacity for child participation in internet governance is also emphasized. These factors contribute to creating a safer, more inclusive, and child-centric online environment.
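
The trusted-anchor pattern described above — verify once, discard the raw data, and hand the platform only a signed pseudonymous claim — can be sketched in a few lines. This is a minimal illustration, not any deployed system: real schemes would use asymmetric signatures or verifiable credentials rather than the shared HMAC key shown here, and all names and parameters are hypothetical.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical sketch of a trusted age-verification intermediary ("anchor"):
# it checks a user's document once, issues a pseudonymous signed claim such
# as over_13=True, and does not retain the raw data. The platform verifies
# the claim without ever seeing a name or birth date.

ANCHOR_KEY = secrets.token_bytes(32)  # stands in for the anchor's signing key

def issue_age_token(date_of_birth_year: int, current_year: int = 2023) -> dict:
    """Check age, then emit only a pseudonymous, signed claim."""
    claim = {
        "subject": secrets.token_hex(8),  # random pseudonym, not a real ID
        "over_13": current_year - date_of_birth_year >= 13,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ANCHOR_KEY, payload, hashlib.sha256).hexdigest()
    # the birth year goes out of scope here: the raw datum is not stored
    return {"claim": claim, "sig": sig}

def platform_checks_token(token: dict) -> bool:
    """The platform learns only over_13, plus the fact the anchor vouched."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ANCHOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_13"]

token = issue_age_token(date_of_birth_year=2005)
print(platform_checks_token(token))
```

The design choice mirrors the point made in the discussion: what must survive verification is the attestation, not the data that produced it, and audits can then focus on whether the anchor really deletes what it saw.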

In summary, the discussion focuses on various important aspects of internet development, youth engagement, and online safety. Dot Asia’s initiatives and the introduction of the .Kids top-level domain reflect positive steps toward promoting internet development and protecting children’s rights. The importance of stakeholder cooperation, privacy considerations, and child involvement in policy-making processes is also highlighted. By addressing these aspects, stakeholders can work together to create a safer and more inclusive online space for all.

Sonia Livingstone

The discussions revolved around the significance of safeguarding children’s right to privacy in the digital realm and its interlinkage with other child rights. It was emphasised that children’s privacy is essential as it directly influences their safety, dignity, and access to information. Sonia Livingstone, an expert in the field, played an instrumental role in the drafting group for general comment number 25, which specifies how the Convention on the Rights of the Child applies to digital matters.

Furthermore, it was noted that children themselves possess an understanding of and are actively involved in negotiating their digital identity and privacy. To understand their perspective, a workshop was conducted by Livingstone to gauge how children perceive their privacy and the conditions under which they would be willing to share information globally. It was found that children universally recognise the importance of privacy and view it as a matter that directly affects them.

The introduction of age-appropriate design codes, tailored to cater to a child’s age, was highlighted as an effective regulatory strategy to protect children’s privacy. These codes have been implemented in various international and national settings, ensuring privacy in accordance with the child’s developmental stage. Livingstone, alongside the Five Rights Foundation, spearheaded the Digital Futures Commission, which sought children’s views to propose a Child Rights by Design approach.

Addressing the identification of internet users who are children for the purpose of upholding their rights online was identified as another crucial aspect. Historically, attempts to respect children’s rights on the internet have failed because the age of the user was unknown. It was emphasised that a mechanism is needed to determine the age of each user in order to effectively establish who is a child.

Regarding the implementation of age verification, it was suggested that a new approach is needed, involving third-party intermediaries for age checks. These intermediaries should operate with transparency and accountability, ensuring accuracy and privacy. However, it was acknowledged that not all sites and content necessitate age checks, and a risk assessment should be conducted to determine the appropriateness of such checks. Only sites hosting content inappropriate for children should require age verification.

The role of big tech companies in relation to age assessment was also discussed. It was posited that these companies likely already possess the capability to accurately determine the age of their users, highlighting the potential for collaboration in ensuring child rights protection online.

Furthermore, the importance of companies adopting child rights impact assessments was stressed. Many companies already understand the importance of impact assessments in various contexts, and embedding youth participation in the assessment process is seen as crucial. Consideration should be given to the full range of children’s rights.

There were differing perspectives on child rights impact assessments, with some suggesting that they should be made mandatory for companies. It was argued that such assessments can bring about significant improvements in child rights protection when integrated into company processes.

The active involvement of children and young people in the development of data protection policies was also highlighted as a key recommendation. Their articulate and valid perspectives should be taken into account to ensure effective policy formulation.

Finally, the importance of adults advocating for the active participation of young people in meetings, events, and decision-making processes was emphasised. Adults should actively address the lack of youth representation and ensure that young people have a voice and influence in relevant discussions.

In conclusion, the discussions centred on the necessity of protecting children’s privacy in the digital environment and its alignment with other child rights. Various strategies, including age-appropriate design codes and third-party intermediaries for age verification, were proposed. The involvement of children, youth, and adults in policy development and decision-making processes was considered pivotal for effective protection of children’s rights online.

Emma Day

Civil society organizations play a crucial role in advocating for child-centred data protection. They can engage in advocacy related to law and policy, as well as litigation and regulatory requests. For example, Professor Sonia Livingstone’s work on the use of educational technology in schools and the launch of the UK’s Digital Futures Commission highlight the importance of civil society organizations advocating for proper governance of educational technology in relation to children’s data protection.

Litigation and requests to regulators offer another important avenue for civil society organizations to advance child-centred data protection. This is evident in cases such as Fairplay’s complaint about YouTube’s violation of the Children’s Online Privacy Protection Act, which resulted in Google and YouTube paying a significant fine. These actions demonstrate the impact civil society organizations can have in holding tech companies accountable for their data protection practices.

Community-based human rights impact assessments are crucial for ensuring child-centred data protection. This involves consulting with companies, working with technical and legal experts, and including meaningful consultation with children. By involving children in the process, civil society organizations can better understand the implications of data processing and ensure that their rights and interests are taken into account.

Civil society organizations should also involve children in data governance. Involving children in activities such as data subject access requests can help them understand the implications of data processing and empower them to participate in decision-making processes. Additionally, audits of community-based processes involving artificial intelligence could engage older children, allowing them to contribute to ensuring ethical and responsible data practices.

Education about data processing and its impacts is crucial for meaningful child involvement. It is important for people, including children, to understand the implications of data governance for their rights. Practical activities, like writing to a company to request their data, can be incorporated into education to provide a hands-on understanding of the subject.
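
One such hands-on activity — writing to a company to request one's data — can be sketched as a simple letter generator. The company name, sender, and email are placeholders; the GDPR Article 15 citation (right of access, with a one-month response deadline) is the relevant provision in the EU, while other jurisdictions have their own equivalents.

```python
from datetime import date

# Hypothetical classroom exercise: generate a data subject access request
# (DSAR) letter a young person could send to a company. All names here are
# illustrative; the legal citation assumes the GDPR applies.

def draft_access_request(your_name: str, company: str, email_on_file: str) -> str:
    return (
        f"{date.today():%d %B %Y}\n\n"
        f"Dear {company} Data Protection Officer,\n\n"
        f"My name is {your_name}. Under Article 15 of the GDPR (right of "
        f"access), I request a copy of all personal data you hold about me, "
        f"the purposes for which it is processed, and the parties it has "
        f"been shared with.\n"
        f"The account associated with my data is registered to {email_on_file}.\n\n"
        f"Please respond within one month, as the regulation requires.\n\n"
        f"Sincerely,\n{your_name}\n"
    )

letter = draft_access_request("Amina", "ExampleApp Ltd", "amina@example.org")
print(letter)
```

Walking through what each clause of the letter asks for, and then reading the company's actual response, is exactly the kind of concrete exercise the discussion recommends for making data governance tangible to children.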

Civil society organizations need to collaborate with experts for effective child involvement. In complex assessments, a wide range of expertise is required, including academics, technologists, and legal experts. By collaborating with experts, civil society organizations can ensure that their efforts are based on sound knowledge and expertise.

Age verification should not be the default solution for protecting minors’ data. Other non-technical alternatives should be investigated and considered. Different jurisdictions have differing views on the compliance of age verification products with privacy laws, highlighting the need for careful consideration and evaluation of such solutions.

In efforts to protect children’s data, it is essential to centre the most vulnerable and marginalised children. Children are not a homogeneous group, and it is important to address the varying levels of vulnerability and inclusion across different geographies and demographics.

Designing products for the edge cases and risky scenarios is crucial for digital safety. Afsaneh Rigot’s work on inclusive design advocates for designing from the margins, as this benefits everyone. By considering the most difficult and risky scenarios, civil society organizations can ensure that digital products and platforms are safe and accessible for all.

In conclusion, civil society organizations have a vital role to play in championing child-centred data protection. Through advocacy, litigation, regulatory requests, human rights impact assessments, involvement in data governance, education, collaboration with experts, exploring non-technical alternatives to age verification, considering the needs of the most vulnerable children, and designing for edge cases, these organizations can contribute to a safer and more inclusive digital landscape for children.

Theodora Skeadas

The discussion revolves around several key issues related to children’s data protection and legislation. One focal point is the importance of understanding international children’s rights principles, standards, and conventions. The UN Convention on the Rights of the Child features prominently as a widely ratified international human rights treaty that enshrines the fundamental rights of all children under the age of 18, serving as a foundational document in safeguarding children’s rights.

Another significant aspect highlighted is the need for appropriate data collection, processing, storage, security, access, and erasure. It is emphasized that organizations should only collect data for legitimate purposes and with the consent of parents and guardians. Moreover, these organizations should use children’s data in a way that is consistent with their best interest. Implementing adequate security measures to protect children’s data is also underscored as crucial.

Consent, transparency, data minimization, data security, and profiling are identified as the major issues surrounding the collection and processing of children’s personal data. It is mentioned that children may not fully understand what it means to consent to the collection and use of their personal data. Additionally, organizations may not be transparent about how they collect, use, and share children’s personal data, making it difficult for parents to make informed decisions. The over-collection of personal data by organizations is also highlighted as a concern.

The need for strengthening legal protection, improving transparency and accountability, as well as designing privacy-enhancing technologies, is emphasized as ways to address the issues related to children’s data. Governments can play a role in strengthening legal protections for children, such as requiring parental consent and prohibiting organizations from profiling children through targeted advertising. It is also mentioned that educating parents and children about the risks and benefits of sharing personal data online is crucial. Technologists are encouraged to design products and services that collect and use less personal data from children.

There is a global focus on legislation discussions that will impact child safety. Measures such as the Digital Services Act and Digital Markets Act in the European Union, as well as the UK online safety bill, are mentioned as examples of legislation that will have an impact on child safety.

In the context of the United States, there is a gap in legislation related to assistive education technology (ed tech) in schools. Existing bills mostly focus on access, age verification, policies, and education, rather than addressing the usage of assistive technology.

There is also concern about the challenges faced in passing comprehensive legislation related to children’s data, particularly due to competing interests and a divided political landscape. It is acknowledged that despite the proliferation of data and data-related issues concerning children, passing effective legislation proves difficult.

The dataset analysis also reveals the need to educate legislators about the rights and principles of children. Often, legislators may not be adequately informed about the rights of children and the specific meaning of rights like privacy and freedom of expression in the context of children.

The importance of including children in decision-making processes is emphasized as it makes legislation child-centric and serves the intended purpose well. Inclusion of children in the legislative process ensures that their voices and perspectives are heard and considered.

The analysis also highlights the necessity of considering the needs of children from diverse backgrounds. It is crucial to acknowledge and address the unique challenges and requirements of children from different social, cultural, and economic circumstances.

Furthermore, the inclusion of children as active participants in conversations about their well-being is stressed. This can be done through their participation in surveys, focus groups, workshops, and empowering them to advocate for themselves in the legislative process.

There is a suggestion for children to be represented on company advisory boards, emphasizing the importance of their inclusion and representation in corporate governance.

In conclusion, the discussion delves into various aspects of children’s data protection and legislation, shedding light on key issues and suggestions for addressing them. It emphasizes the significance of understanding international children’s rights principles, implementing appropriate data collection and processing practices, ensuring transparency, accountability, and consent, and designing privacy-enhancing technologies. Additionally, it highlights the importance of including children in decision-making processes, considering their diverse needs, and strengthening legal protection. However, there is recognition of the challenges posed by political division and the difficulties in passing comprehensive legislation.

Njemile Davis-Michael

During the discussion, various topics relating to data governance and the impact of digital technology on protecting children’s rights and promoting their well-being were covered. One significant highlight was the influence of the United States Agency for International Development (USAID) in technological innovation, as well as its efforts in humanitarian relief and international development. With 9,000 colleagues spanning 100 countries, USAID plays a significant role in funding initiatives to improve digital literacy, promote data literacy, enhance cybersecurity, bridge the gender digital divide, and protect children from digital harm.

Digital tools were identified as increasingly important for adults working to protect children. These tools, such as birth registration systems and case management support, help facilitate the protection and integration of children into broader social and cultural norms. However, it was acknowledged that increased digital access can also lead to increased risks, including cyberbullying, harassment, gender-based violence, hate speech, sexual abuse and exploitation, recruitment into trafficking, and radicalization to violence. The negative consequences of these risks were highlighted, such as limited exposure to new ideas, restricted perspectives, and impaired critical thinking skills due to data algorithms.

To address these risks, it was argued that better awareness, advocacy, and training for data privacy protection are crucial. The lack of informed decision-making about data privacy was identified as an issue that transfers power from the data subject to the data collector, with potentially long-lasting and harmful consequences. Recognizing the need for safer digital environments, data governance frameworks were presented as a solution to mitigate the risks of the digital world. These frameworks can create a safer, more inclusive, and more exciting future.

The importance of responsible and ethical computer science education for university students was emphasized. Collaboration between USAID and the Mozilla Foundation aims to provide such education in India and Kenya, with the goal of creating technology with more ethical social impacts. The integration of children’s rights in national data privacy laws was also advocated, highlighting the need for a legal framework that safeguards their privacy and well-being.

Empowering youth advocates for data governance and digital rights was seen as a positive step forward, with projects like Project Omna, founded by Omar, a youth advocate for children’s digital rights, gaining support and recognition. The suggestion to utilize youth networks and platforms to inspire solutions further highlighted the importance of involving young voices in shaping data governance and digital rights agendas.

The tension between the decision-making authority of adults and the understanding of children’s best interests was acknowledged. It was argued that amplifying children’s voices in the digital society and discussing digital and data rights in formal education institutions is necessary to bridge this gap and ensure the protection of children’s rights.

Notably, the need for a children’s Internet Governance Forum (IGF) was highlighted, recognizing children as stakeholders in internet governance. It was agreed that raising awareness and capacity building are essential in bringing about positive changes for children within this sphere.

In conclusion, the discussion shed light on the crucial role of data governance and digital technology in safeguarding children’s rights. It emphasized the importance of responsible technological innovation, data privacy protection, and the inclusion of children’s voices in decision-making processes. By addressing these issues, society can create a safer and more inclusive digital world for children, where their rights are protected, and their well-being is prioritized.

Moderator

The discussion on children’s privacy rights in the digital environment emphasised the importance of protecting children from data exploitation by companies. One argument raised was the need for regulatory and educational strategies to safeguard children’s privacy. Age-appropriate design codes were highlighted as a valuable mechanism for respecting and protecting children’s privacy, considering their age and understanding the link between privacy and other rights. Professor Sonia Livingstone, who was part of the drafting group for general comment number 25, stressed the need for a comprehensive approach that ensures children’s privacy rights are incorporated into the design of digital products and services.

The .Kids initiative was discussed as an example of efforts to promote child safety online. This initiative, which focuses on children’s rights and welfare, enforces specific guidelines based on the Convention on the Rights of the Child. It also provides a platform for reporting abuse and restricted content. Edmon Chung, in his presentation on the .Kids initiative, highlighted the importance of protecting children’s safety online and addressed the issue of companies exploiting children’s data.

USAID’s involvement in digital innovation and international development was also mentioned. The organisation works with colleagues in various countries and supports initiatives related to digital innovation. Their first digital strategy, launched in 2020, aims to promote technological innovation and the development of inclusive and secure digital ecosystems. USAID is committed to protecting children’s data through initiatives such as promoting awareness, aligning teaching methods with EdTech tools, and working on data governance interventions in the public education sector.

The discussion also brought attention to the risks children face in the digital environment, including online violence, exploitation, and lack of informed decision-making regarding data privacy. It was emphasised that digital tools play a significant role in protecting children and aiding in areas such as birth registration, family tracing, case management, and data analysis. However, the risks associated with digital tools must also be addressed.

Civil society organisations were recognised for their crucial role in advocating for child-centered data protection. They engage in advocacy related to law and policy, and their efforts have resulted in updated guidance on children’s privacy in educational settings and the investigation of violations of children’s privacy laws. The importance of involving children in data governance and policy development was highlighted, along with the need for meaningful consultation and education.

The discussion underscored the need for age verification mechanisms and risk assessments to ensure the protection of children online. The development of age verification products that comply with privacy laws was seen as a vital step. Concerns were raised regarding the lack of transparency and oversight in current age assessment methods. It was suggested that products should be designed for difficult and risky scenarios to benefit all users.

Overall, the insights from the discussion highlighted the importance of protecting children’s privacy in the digital environment and called for action to create a safer and more inclusive online space for children.

Session transcript

Moderator:
Finally, scan the Mentimeter QR code, which will be available on the screen shortly, or use the link in the chat box to express your expectations from the session. As a reminder, I would like to request all the speakers and the audience who may ask questions during the Q&A round to please speak clearly and at a reasonable pace. I would also like to request everyone participating to maintain a respectful and inclusive environment in the room and in the chat. For those who wish to ask questions during the Q&A round, please raise your hand. Once I call upon you, you may use the standing microphones available in the room. And while you do that, please state your name and the country you are from before asking the question. Additionally, please make sure that you mute all the other devices when you are speaking so as to avoid any audio disruptions. If you are participating online and have any questions or comments and would like the moderator to read out your question or comment, please type it in the Zoom chat box. When posting, please start and end your sentence with a question mark to indicate that it is a question or use a full stop to clearly indicate that it is a comment. Thank you. Let us now begin the session. Ladies and gentlemen, thank you very much again for joining today’s session. I am Ananya. I am the youth advisor to the USAID Digital Youth Council and I will be the on-site moderator for today’s session. Mariam from Gambia will be the online moderator and Nelly from Georgia will be the rapporteur for this session. Today, we embark on a journey that transcends the boundaries of traditional discourse and delves into the intricate realm of safeguarding children’s digital lives. In this age of boundless technological advancements, we find ourselves standing at a pivotal juncture where the collection and utilization of children’s data have reached unprecedented heights. From the moment their existence
becomes evident, their digital footprints begin to form, shaping their online identities even before they can comprehend the implications. Ultrasound images, baby cameras, social media accounts, search engine inquiries, the vast web of interconnected platforms weaves a tapestry of data silently capturing every heartbeat, every interaction. But amidst this digital tapestry lies a profound challenge, the protection of children’s data and their right to privacy. Children, due to their tender age and limited understanding, may not fully grasp the potential risks, consequences, and safeguards associated with the processing of their personal information. They are often left vulnerable, caught in the crossfire between their innocent exploration of the online world and the complex web of data-collecting institutions. Hence today, we are gathered here to delve deeper into the discourse on children’s online safety, moving beyond the usual topics of cyberbullying and internet addiction. Our focus will be on answering the following questions. How do we ensure that children in different age groups understand, value, and negotiate their digital self and privacy online? What capabilities or vulnerabilities affect their understanding of their digital data and digital rights? What is a good age verification mechanism so that such a mechanism does not in itself end up collecting even more personal data? And finally, how can we involve children as active partners in the development of data governance policies and integrate their evolving capabilities, real-life experiences, and perceptions of the digital world to ensure greater intergenerational justice in laws, policies, strategies, and programs? We hope that this workshop will help the attendees unlearn the current trend of universal and often adult treatment of all users, which fails to respect
Attendees will be introduced to the ongoing debates on the digital age of consent. Panelists will also elaborate on children’s perception of their data self and the many types of children’s privacy online. Participants will also be given a flavor of the varying national and international conventions concerning the rights of children regarding their data. As our speakers come from a range of stakeholder groups, they will provide the attendees with a detailed idea of how a multi-stakeholder, intergenerational, child-centered, child-rights-based approach to data governance-related policies and regulations can be created. I invite you all to actively engage in the session, to listen to our esteemed panelists, and to ask questions, contribute your insights, and share perspectives. I would now like to introduce our speakers for today. To begin with, we have Professor Sonia Livingstone, who is a professor at the Department of Media and Communications at the London School of Economics. She has published about 20 books and advised the UN Committee on the Rights of the Child, OECD, ITU, and UNICEF on children’s safety, privacy, and rights in the digital environment. Next, we have Edmon Chung, who serves as the CEO of .Asia, on the board of ICANN, Make a Difference, Engage Media, Exco of ISOC Hong Kong, and Secretariat of APR IGF. He has co-founded the Hong Kong Kids International Film Festival and participates extensively on internet governance matters. Next, we have Njemile Davis-Michael, who is a Senior Program Analyst in the Technology Division of USAID, where she helps to drive the agency’s development efforts related to internet affordability, data governance, and protecting children and youth from digital harms. Next, we have Emma Day, who is a human rights lawyer specializing in human rights and technology, and she is also the co-founder of TechLegality.
She has been working on human rights issues for more than 20 years now, and has lived for five years in Africa and six years in Asia. And last but not least, we have Theodora Skeadas, who is a technology policy expert. She consults with civil society organizations, including but not limited to the Carnegie Endowment for International Peace, the National Democratic Institute, the Committee to Protect Journalists, and the Partnership on AI. I would now like to move to the next segment, where I invite our speakers to take the floor and convey their opening remarks to our audience. I now invite Professor Sonia Livingstone to please take the floor.

Sonia Livingstone:
Thank you very much for that introduction, and it’s wonderful to be part of this panel. So I want to talk about children’s right to privacy in the digital environment. And as with other colleagues here, I’ll take a child rights focus, recognizing holistically the full range of children’s rights in the Convention on the Rights of the Child, and then homing in on Article 16 on the importance of the protection of privacy. So I was privileged to be part of the drafting group for General Comment 25, which is how the Committee on the Rights of the Child specifies how the convention applies in relation to all things digital. And I do urge people to read the whole document. I’ve just here highlighted a few paragraphs about the importance of privacy and the importance of understanding and implementing children’s privacy, often through data protection and through privacy by design, as part of a recognition of the wider array of children’s rights. So respect for privacy must be proportionate, part of the best interests of the child, and must not undermine children’s other rights, but ensure their protection. And I really put these paragraphs up to show that we are addressing something complex in the offline world, and even more complex, I fear, in the digital world, where data protection mechanisms are often our main, but not only, tool to protect children’s privacy in digital contexts. I’m an academic researcher, a social psychologist, and in my own work I spend a lot of time with children seeking to understand exactly how they understand their rights and their privacy, and we did an exercise as part of some research a couple of years ago that I wanted to share, introducing the types of privacy and the ways in which children, as well as we, could think about privacy. So, as you can see on the screen, we did a workshop where we asked children their thoughts on sharing different kinds of information with different kinds of sources, with different organisations. 
What would they share and under what conditions with their school, with trusted institutions like the doctor or a future employer? What would they share with their online peers and contacts? What would they share with companies, and what do they want to keep to themselves? And we used this as an exercise to show that children know quite a lot, they want to know even more, and they don’t think of their privacy only as a matter of their personal, their interpersonal privacy; it is very important to them that the institutions and the companies also respect their privacy. And if I can summarise what they said in one sentence, on the idea that companies would take their data and exploit their privacy, the children’s cry was, it’s none of their business. And the irony that we are dealing with here today is that it is precisely those companies’ business. We can see some similar kinds of statements from children around the world in the consultation that was conducted to inform the UN Committee on the Rights of the Child’s General Comment 25. And as you can see here, children around the world have plenty to say about their privacy and understand it both as a fundamental right in itself and also as important for all their other rights. Privacy mediates safety, privacy mediates dignity, privacy mediates the right to information, and many more. I think we’re now in the terrain of looking for regulatory strategies as well as educational ones. And I was asked to mention, and I think this panel will discuss, the idea of age-appropriate design codes, particularly as one mechanism that has proven invaluable, and we will talk further about this, I know. But the idea that children’s privacy should be respected and protected in a way that is appropriate to their age, and that understands the link between privacy and children’s other rights, I think this is really important. 
And we see this regulatory move now happening in a number of different international and national contexts. I’ve spent the last few years working with the Five Rights Foundation as part of running the Digital Futures Commission. And I just wanted to come back to that holistic point here. In the Digital Futures Commission, we asked children to comment on and discuss all of their rights in digital contexts, not just as a research project, but as a consultation activity to really understand what children think and want to happen, and to be heard on a matter that affects them, and privacy online is absolutely a matter that affects them. And we used this to come up with a proposal for child rights by design, which builds on initiatives for privacy by design, safety by design, and security by design, but goes beyond them to recognize the holistic nature of children’s rights. And so here we really pulled out 11 principles based on all the articles of the UN Convention on the Rights of the Child. And so you can see that privacy is a right to be protected in the design of digital products and services, as part of attention to children’s rights and age-appropriate service, building on consultation, supporting children’s best interests, and promoting their safety, well-being, development, and agency. And I will stop there, and I look forward to the discussion. Thank you.

Moderator:
Thank you very much, Professor Livingstone, that was very, very insightful. We will now move to Edmon. Would you like to take the floor?

Edmon Chung:
Hello. Thank you, thank you for having me. This is Edmon from .Asia, and building on what Sonia just mentioned, I will be sharing a little bit about our work at .Kids, which is actually also trying to operationalize the Convention on the Rights of the Child. But first of all, I just want to give a quick background on why .Asia is involved in this. .Asia obviously operates the .Asia top-level domain, so you can have domains such as whatever.Asia, and that provides the income source for us, and so every .Asia domain actually contributes to internet development in Asia. Some of the things that we do include youth engagement, and we are very proud that the NetMission program is the longest-standing youth internet governance engagement program. And that sort of built our interest in and awareness of supporting children and children’s rights online. Back in 2016, we actually launched a little program that looked at the impact of the sustainable development goals and the internet. And we recently launched an eco-internet initiative, but I’m not going to talk about that. What I want to highlight is that engaging children on platforms, including top-level domains, is something that I think is important. So, on the specific topic of .Kids: the .Kids initiative actually started more than a decade ago, in 2012, when the application for the .Kids top-level domain was put in through ICANN. Right at that point, there was an engagement with the children’s rights and children’s welfare community about the process itself, but I won’t go into details. What I did want to highlight is that part of the vision of .Kids is actually to engage children to be part of the process in developing policies that affect them, and to involve children’s participation and so on. 
And in fact, in 2013, while we were going through the ICANN process, we actually helped support the first children’s forum focused on the ICANN process itself, and that was held in April of 2013. Fast forward 10 years: we were finally able to put .Kids into place, well, actually last year; the .Kids top-level domain entered the internet on April 4th, 2022, and was launched on November 29th, 2022. So it is less than a year old, really not even a toddler. But let’s focus on the difference between .Kids and, for example, .Asia or .com. One of the interesting things is that at the ICANN level, there is no difference. For ICANN, operating .Kids would be exactly the same as operating .com. We disagreed, and that’s why we engaged in the decade-long campaign to operate .Kids, because we believe that there are policies required above and beyond just a regular registry, just a regular .com or wherever: there is not only a set of expectations, it is important in itself, and here is why we say it’s the kids’ best interest domain. That is the idea behind .Kids, so let’s look at part of the registry policies. For .Kids ourselves, if you think about it, of course we don’t keep children’s data or data about kids, but does that mean we don’t have to have policies around the registry or for .Kids domains themselves? Well, we think not. And building off what Professor Livingstone was saying, we have a set of guiding principles that was developed with support from the children’s rights and welfare community and based on the Convention on the Rights of the Child. And of course, there are additional kids-friendly guidelines, a kids’ anti-abuse policy, and also kids’ personal data protection policies. 
And I wanna highlight that the entire set of guiding principles is actually based on the Convention on the Rights of the Child, probably not all the articles, but certainly the articles that outline protection and prohibited materials. One way to think about it is that for the .Kids domain, we do enforce restrictions on restricted content, and the best way to think about it is really that, if you think of a movie, restricted content, the rated-R movies, would obviously not be acceptable on .Kids domains. But on top of that, we also have specific privacy provisions, built on Article 16, as Sonia mentioned earlier, and some other aspects drawn from the Convention on the Rights of the Child. So we think there is something important being built into it. We are definitely the first registry that builds policies around the Convention on the Rights of the Child, but we are also one of the very few domain registries that would actually actively engage in suspension of domains, or processes to deal with restricted content. Beyond that, there is a portal and a platform to report abuse and to alert us to issues. And in fact, I can report that we have already taken action on abusive content and restricted content and so on. But I would like to end with a few items. There are certainly a lot of abuses on the internet, but the abuses that are appropriate for top-level domain registries to act on are a subset of that. There are many other abuses that happen on the internet, and there are different types of DNS abuses and different types of cyber abuses that it may or may not be effective for the registry to take care of. And that’s, I guess, part of what we discuss. That’s why we bring it to the IGF and these types of forums: because there are other stakeholders that need to help support a safer environment online for children. So with that, I guess there are a number of acts that have been put in place in recent years. 
And I think .Kids is a good platform to support the kids online safety bill in the US and the Online Safety Bill in the UK. We do believe that collaboration is required in terms of security and privacy. And part of the vision, as I mentioned, for .Kids is to engage children in the process, and we hope that we will get there soon. But it’s still in its toddler phase, so it doesn’t generate enough income for us to bring everyone here. But the vision itself is to put the policies and protections in place and also, into the future, to be able to support children’s participation in this internet governance discussion that we have.

Moderator:
Okay. Thank you so much, Edmon. That was very, very inspiring. Let’s now go to Njemile.

Njemile Davis-Michael:
Thank you, Ananya. Wonderful to be here. Thank you so much for joining the session and giving me the opportunity to speak about USAID’s work in this area. So USAID is an independent agency of the United States government, where I work with 9,000 colleagues in a hundred countries around the world to provide humanitarian relief and to fund international development. In the technology division where I sit, there are a number of initiatives that we support related to digital innovation, from securing last-mile internet connectivity to catalyzing national models of citizen-facing digital government. And we work in close collaboration with our U.S. government colleagues in Washington to inform and provide technical assistance, to support locally led partnerships, and to create the project ideas and infrastructure needed to sustain the responsible use of digital tools. Although we rely consistently on researching, developing, and sharing best practices, our activity design can be as varied as the specific country and community contexts in which we are called to action. Indeed, the many interconnected challenges that come with supporting the development of digital societies have challenged our own evolution as an agency. So in early 2020, we launched USAID’s first digital strategy to articulate our internal commitment to technological innovation, as well as our support for open, inclusive, and secure digital ecosystems in the countries we serve through the responsible use of digital technology. The strategy is a five-year plan that is implemented through a number of initiatives, and there are some that are particularly relevant to our work with young people. 
Specifically, we have made commitments to improve digital literacy, to promote data literacy through better awareness, advocacy, and training for data privacy protection and national strategies for data governance, to improve cyber security, to close the gender digital divide and address the disproportionate harm women and girls face online, and to protect children and youth from digital harm. Each of these initiatives is supported by a team of dedicated professionals, which allows us to think about how we work at the intersection of children and technology. Digital tools play an increasingly important role for adults working to protect children, for example, by facilitating birth registration, providing rapid family tracing, supporting case management, and by using better, faster analysis of the data collected to inform the effectiveness of these services. And they can also play a role in the development and integration of children themselves into larger social and cultural norms by providing a place to learn, play, share, explore, and test new ideas. Indeed, many children are learning how to use a digital device before they even learn how to walk. However, we also know that increased digital access means increased risk. And so in the context of protecting children and youth from digital harm, USAID defines digital harm as any activity or behavior that takes place in the digital ecosystem and causes pain, trauma, damage, exploitation, or abuse directly or indirectly in either the digital or physical world, whether financial, physical, emotional, psychological, or sexual. For the estimated one in three Internet users who are children, these include risks that have migrated onto or off of digital platforms that enable bullying, harassment, technology-facilitated gender-based violence, hate speech, sexual abuse and exploitation, recruitment into trafficking, and radicalization to violence. 
Because digital platforms also generate and share copious amounts of data, our colleagues who’ve done an incredible amount of highly commendable work at UNICEF, for example, around children’s data, as well as my colleagues on today’s panel, will likely agree that there are other, perhaps less obvious risks. For example, we’ve observed in recent years that children seem to have given in, I should say, to uniform consent to their data collection, probably due to their naivete and trust of the platforms in which they’re engaging. But a lack of informed decision-making about data privacy and protection effectively transfers power from the data subject to the data collector, and the consequences of this can be long-lasting. The number of social media likes, views, and shares are based on highly interactive levels of data sharing, affecting children’s emotional and mental health. Data algorithms can be leveraged to profile and manipulate children’s behavior, narrowing exposure to new ideas, limiting perspective, and even stunting critical thinking skills. Data leaks and privacy breaches, which are not just harmful on their own but can be orchestrated to cause intentional damage, are another risk. And we can counteract these and other challenges by helping practitioners understand the risks to children’s data and by ensuring accountability for bad actors. The theoretical physicist Albert Einstein is famously quoted as saying that if he had one hour to solve a problem, he would spend 55 minutes thinking about the problem and only five minutes on the solution. And the sheer amount of data that we generate and have access to means that our vision of solving the global challenges we face with data is still very much possible, especially as we are realizing unprecedented speeds of data processing that are fueling innovations in generative AI, that will enable the use of 5G, and that we will see in quantum computing. 
So as we celebrate the 50th birthday of the Internet at this year’s IGF, it’s amazing to think about how much all of us here have been changed by the technological innovations paved by the Internet and in that same spirit of innovation, we’re optimistic at USAID that data governance frameworks can help mitigate the risks we see today and be leveraged to create a safer, more inclusive, and even more exciting world of tomorrow, which is the Internet our children want.

Moderator:
Thank you very much, Njemile. Emma, would you like to take the floor next? Emma, are you here with us?

Emma Day:
Thank you. Yes. Can you see my screen?

Moderator:
Yes. Please go ahead.

Emma Day:
Great. Thank you. Okay, so I’ve been asked to answer how civil society organizations can tackle the topic of child-centered data protection. I think this is a multi-stakeholder issue, and there are many things civil society organizations can do. As a lawyer, I’m going to focus on the more law- and policy-focused ideas. So there are three main approaches that I have identified. The first is that civil society organizations can engage in advocacy related to law and policy. Second, they can engage themselves in litigation and requests to regulators. And third, they can carry out community-based human rights impact assessments themselves. So the first example is advocacy related to law and policy; here the target is policymakers and regulators. As an example of this, I was involved in a project that was led by Professor Sonia Livingstone, who’s also on this panel. And this was part of the UK Digital Futures Commission. And it was a project which involved a team of social scientists and lawyers. And we looked in detail at how the use of ed tech in schools is governed in the UK. And we found it’s not very clear whether the use of ed tech in schools was covered by the UK age appropriate design code or children’s code. So the situation of data protection for children in the education context was very uncertain. We had a couple of meetings with the ICO, and the Digital Futures Commission also had a group of high-level commissioners they had brought together from government, civil society, the education sector and the private sector. And they held two public meetings about the use of ed tech in UK schools. Subsequently, in May 2023, the ICO published updated guidance on how the children’s code applies to the use of ed tech in UK schools. I won’t go into the details of that guidance now, but suffice to say this was much needed clarification. And it seemed to be a result of our advocacy, although this was not specifically stated. 
The second example is civil society organisations engaging themselves in litigation and requests to regulators. So some civil society organisations have lawyers as part of their staff, or they can work with lawyers and other experts. An example of this is an organisation in the US called Fairplay. In 2018, they led a coalition asking the Federal Trade Commission to investigate YouTube for violating the Children’s Online Privacy Protection Act, or COPPA, by collecting personal information from children on the platform without parental consent. And as a result of their complaint, Google and YouTube were required to pay what was then a record $170 million fine in a settlement in 2019 with the Federal Trade Commission. So in response, rather than getting the required parental permission before collecting personal information from children on YouTube, Google claimed instead it would comply with COPPA by limiting data collection and eliminating personalized advertising on their Made for Kids platform. So Fairplay wanted to check if YouTube had really eliminated personalized advertising on their Made for Kids products, and they ran their own test by buying some personalized ads. Fairplay says that their test proves that ads on Made for Kids videos are in fact still personalized and not contextual, which is not supposed to be possible under COPPA. And Fairplay wrote to the Federal Trade Commission in August 2023 and made a complaint and asked them to investigate and to impose a fine of upwards of tens of billions of dollars. We don’t know the outcome of this yet; that complaint was only put in in August this year. And then the third solution, which I think is a really good one for civil society organizations, and which I haven’t really seen done completely in practice yet, is to carry out community-based human rights impact assessments. 
So often companies themselves carry out human rights impact assessments, but it’s also absolutely something that can be done at a community level. And this involves considering not just data protection, but also children’s broader human rights as well. It’s a multidisciplinary effort, so it involves consulting with the company about the impact of their products and services on children’s rights, perhaps working with technical experts to test what’s actually happening with children’s data through apps and platforms, and working with legal experts to assess whether this complies with laws and regulations. And crucially, this should also involve meaningful consultation with children, and I think we’re gonna talk a little bit later about what meaningful consultation with children really looks like. I’m going to leave it there because I think I’m probably at the end of my time, looking forward to discussing further, thank you.

Moderator:
Thank you very much, Emma. And finally, Theodora, would you like to share your opening remarks with us?

Theodora Skeadas:
Yes, thank you so much. Hi, everybody. It’s great to be here with you. Let me just pull up my slides. Alrighty. So, it’s great to be here with all of you, and I’ll be spending a few minutes talking about key international children’s rights principles, standards, and conventions, as well as major issue areas around personal data collection, processing, and profiling, and then some regulation and legislation to be keeping an eye out for. So, I’ll start with standards and conventions and then turn to some principles. Some of the major relevant standards and conventions that are worth discussing I’ve listed here, which include the UN Convention on the Rights of the Child, a widely ratified international human rights treaty, which enshrines the fundamental rights of all children under age 18. It includes a number of provisions that are relevant to children’s data protection, such as the right to privacy, the right to the best interests of the child, and the right to freedom of expression. Also, the 2021 UN guidelines on the rights of the child in relation to the digital environment. These guidelines provide guidance on how to apply the UNCRC, or the rights of the child, to children’s rights in the digital environment, and they include a number of provisions that are relevant to children’s data protection, like the right to privacy and confidentiality, the right to be informed about the collection and use of data, and the right to have data erased. Then there’s GDPR, or the General Data Protection Regulation. This is a comprehensive data protection law that applies to all organizations that process data for those in Europe, although sometimes this has been extended beyond, specifically for companies or employers that are international and exist beyond the European area. It includes a number of special provisions for children as well. 
Then COPPA, the Children’s Online Privacy Protection Act in the US, is a federal law that protects the privacy of children under age 13 and requires websites and online services to obtain parental consent before collecting or using children’s personal information. So some of the principles that are important to discuss here include data collection, data use, data storage and security, data access and erasure, transparency, and accountability. On data collection, this means that organizations should only collect data for legitimate purposes and with the consent of parents and guardians. On data use, it’s that organizations should use children’s data in a way that is consistent with their best interest. On data storage and security, organizations should implement appropriate security measures to protect children’s data. On data access and erasure, organizations should give children and their parents or guardians access to children’s data and the right to have it erased. On transparency and accountability, organizations should be transparent about what they’re doing to make sure that they’re protecting children. Additionally, there’s age-appropriate design, privacy by default, data minimization, and parental controls. Products and services should be designed with the best interests of children in mind, and also be appropriate for their age and developmental stage. On privacy by default, products and services should be developed with privacy in mind. On data minimization, products and services should only collect and use the minimum amount of data required. On parental controls, products and services should provide parents with meaningful control over their children’s online activities. Major issues around personal data collection, processing, and profiling that are in discussion today include consent: children may not fully understand what it means to consent to the collection and use of their personal data. 
That’s also true for adults, but it’s especially true for children. Transparency: organizations may not be transparent about how they collect, use, and share children’s personal data, which can make it difficult for parents to make informed decisions about their children. Data minimization: organizations often collect more personal data than is necessary for the specific purpose. This excess data can be used for other purposes, like targeted ads and profiling. On data security, organizations may not be implementing adequate security measures to protect the personal data of children from unauthorized access, disclosure, modification, and destruction, which can put children at risk. Profiling: organizations may use children’s personal data to create profiles, which can be used to target children with advertising and content that might not be in their best interests. Additionally, strengthening legal protection: there’s an ongoing conversation around how governments can strengthen legal protections for children, such as requiring parental consent and prohibiting organizations from profiling children through targeted advertising. Also raising awareness: there is a huge conversation ongoing now about how parents and children should be educated about the risks and benefits of sharing personal data online, to make sure they’re making informed decisions about what to share and what not to share. Also improving transparency and accountability: organizations should be transparent about how they collect, use, and share children’s personal data, and they should be accountable for that data. And then last is designing privacy-enhancing technologies. Technologists can design products and services that collect and use less personal data from children, and also that help children and parents manage their privacy online. So next, we’ll look at regulation and legislation. We’ve been seeing a huge amount of regulation and legislation in this space. In the U.S. context, we’ve seen some U.S. 
federal bills, but because those haven’t passed, we’ve been seeing a transition to state-level bills. So I wanna pull up, there we go. So this is a piece that I wanted to share that talks about the bills we’re seeing in this area in the U.S. There is here a compilation of 147 bills across the U.S. states. Not all are represented, but a lot of them are, and interestingly, states across the political divide. And you can see here the legislation that’s in discussion includes themes like age verification, more age verification, instruction, parental consent, data privacy, technologies, access issues, more age verification, so that’s clearly a recurring theme, recommendations on the basis of data, et cetera. And you can see here, there are some categories. So we see law enforcement, parental consent, age verification, privacy, school-based, hardware, filtering, perpetrator, so that looks at safety, algorithmic regulation, and more. And then we can see the methods. So these include third parties, state digital IDs, commercial providers, government IDs, self-attestation, and then you can see what ages these are targeting. So mostly they’re targeting age 18, but there are a few that look at 13, and sometimes other ages as well. And I think that is it. Thank you so much.

Moderator:
Thank you very much, Theodora. I have received a request, actually, from the audience: if you could kindly share the link to the website that you were just sharing with us, that would be great. Those were very, very good remarks. Thank you very much. Okay, so we will now be moving on to the next segment, where I will be directing questions to each of our speakers. We will begin with Professor Sonia Livingstone. While I had a set of questions prepared for you, Professor Livingstone, I think you kind of answered most of those. So let’s pick something from what you have focused on in your opening remarks. You mentioned the age-appropriate design code. So I wanted to know your views on the age-appropriate design code across different countries, since what is appropriate for what age differs in different cultural, national, international, and local contexts. What would you like to say about that, and how can an age-appropriate design code be the answer in such a context?

Sonia Livingstone:
I think that’s a great question, and I think others will want to pitch in. My starting point is to say that if we’re going to respect the rights of children online, we have to know which user is a child. The history of the internet so far is a failed attempt to respect children’s rights without knowing which user is a child. At the moment, we either have no idea who a user is, or product producers somehow assume that the user is an adult, often in the global north, often male, and rather competent to deal with what they find. So we need a mechanism, and the extent to which age assurance is being taken up in the global north and the global south shows the genuine need to identify who is a child. There are two problems, and one you didn’t highlight: it means that we need to, in some way, identify the age of every user in order to know which ones are children. And then, as you rightly say, what is appropriate for children of different ages varies in different cultures. I would answer that by returning to the UN Convention on the Rights of the Child, which addresses children’s rights at the level of the universal.
But there are also many provisions in the convention, and also in General Comment 25, about how this can and should be adjusted and tailored to particular circumstances, not to qualify or undermine children’s rights, but to use mechanisms that are appropriate to different cultures. And I think this will always be contested, and probably should be. But at heart, if you read the age-appropriate design codes, they focus on the ways in which data itself is used by companies in order to support children’s rights, rather than setting a norm for what children’s lives should look like.

Moderator:
Thank you very much, Professor Livingstone. That was a very detailed and nuanced answer. Next, Edmon: since we are on the subject of age, what do you think is a good age verification mechanism, one which does not in itself lead to the collection of more personal data?

Edmon Chung:
Of course, that is a very difficult question, but I guess a few principles to start with. First of all, privacy is not just about keeping data secure and confidential. With privacy, the first question is whether the data should be collected and kept in the first place. If it is just an age verification, and whoever verifies it discards, erases, or deletes the data after the verification, there should be no privacy concern. But of course, platforms and providers don’t usually do that, and that’s one of the problems. The principle itself should be just like when you show your ID: the person takes a look at it, you go in, and that’s it. They don’t take a picture of it and keep a record of it. So that’s privacy to start with. The other thing we then need to think about is whether the age verification is to keep children out or to let children in. It’s a big difference in terms of how you would then deal with it, especially on whether the data should be kept or discarded. Now, on the actual verification mechanism, there are in fact well-developed systems now to do what are called pseudonymous credentials. Basically, the platform or the provider doesn’t have to know the exact data, but can establish digital credentials, using digital certificates and cryptographic techniques, such that parents can vouch for the age and complete the verification without disclosing the child’s personal data. I think these are the mechanisms that are appropriate. And more importantly, I go back to the main thing: if it is just for age verification, whatever data was used should be discarded the moment the verification is done. Thank you very much.
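The pseudonymous-credential flow described here can be sketched in a few lines of code. This is a minimal illustrative sketch, not any particular production scheme: a hypothetical trusted verifier (a school or a parent) checks the raw birth date, issues a signed over/under-threshold claim tied to a random pseudonym, and discards the raw data; the platform then checks the claim without ever seeing the birth date. An HMAC shared between verifier and platform stands in for the digital-certificate machinery a real system would use, and all function and variable names are invented for illustration.

```python
import hashlib
import hmac
import json
import secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held by the trusted verifier

def issue_credential(birth_year: int, threshold_age: int, current_year: int) -> dict:
    """Verifier checks the raw data, derives a yes/no claim, then discards it."""
    claim = {
        "subject": secrets.token_hex(8),  # random pseudonym, not an identity
        "meets_age_threshold": (current_year - birth_year) >= threshold_age,
        "threshold": threshold_age,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    # birth_year goes out of scope here: only the claim and tag survive
    return {"claim": claim, "tag": tag}

def platform_accepts(credential: dict) -> bool:
    """Platform checks the verifier's tag; it never sees the birth date."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["tag"])
            and credential["claim"]["meets_age_threshold"])
```

In a real deployment the verifier would sign with a private key and the platform would verify with the corresponding public key, so no shared secret is needed; the sketch keeps only the essential property, namely that the claim, not the birth date, crosses the trust boundary.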

Moderator:
That was very comprehensive. Next, Njemile. How is USAID thinking about data governance, especially in relation to children’s data governance?

Njemile Davis-Michael:
Yeah. We spend a lot of time thinking about data governance, and that’s because data really fuels the technology that we use: everything either generates data in some way or uses data for its purpose. And technologies have a tendency to reinforce existing conditions, so we want to be really intentional about how data is used to that end. Data governance is important for a few basic reasons. One is because data by itself is not intelligent, so it’s not going to govern itself. And because data multiplies when you divide it, there is so much of it, right? We know that the sheer amount of data that we’re generating needs to be wrangled in some manner if we’re going to have some control over the tools that we’re using. So a data governance framework helps us think about what needs to be achieved with the data, who will make decisions about how the data is treated, and how governance of the data will be implemented. Writ large, we look at five levels of data governance implementation, everything from transnational data governance down to individual protections and empowerment. And that’s really the sweet spot for us in thinking about children: it’s about developing awareness and agency, about participation in data ecosystems. In the middle is sectoral data governance, where we find highly developed norms around data privacy, data for decision-making, and data standardization that help structure data-driven tools like digital portals for sharing data. And so we are currently working with the Mozilla Foundation on a project similar to the one that we heard Emma talking about, where we are working in India’s public education sector to think about data governance interventions there. India has one of the largest, if not the largest, school systems for children in the world.
150 million children are enrolled in about 1.5 million schools across the country. India had one of the longest periods of school shutdown during COVID-19, and EdTech stepped into that gap very altruistically, right, to try to close gaps in student education. However, as Emma has pointed out and as we have found in our own research, there were some nuances in the ways that these EdTech institutions were thinking about student learning compared to the way schools were. Private industry is incentivized by number of users and not necessarily learning outcomes. There needed to be some clarity around the types of standards that EdTech companies are to meet. There’s a reliance on EdTech replacing teachers’ interaction with students, and data subjects generally lack awareness about how their data is used by EdTech and schools to measure student progress and learning. So we’re currently working with a number of working groups in India to really understand how to bridge this gap and to synchronize the collection of data and data analysis in a way that harmonizes analog tools with digital tools. So, teachers who are taking attendance: how does that correlate to scores on EdTech platforms? We’re focused right now on the education sector, but we imagine that this is going to have implications for other sectors as well. We’re also working in partnership with the Mozilla Foundation to look at responsible and ethical computer science for university students, in India and in Kenya. Here, we’re hoping to educate the next generation of software developers to think more ethically about the social impacts of the technology they create, including generative AI.
And then, going back to the work on protecting children and youth from digital harm, we are extremely proud to be working alongside and supporting youth advocates through our Digital Youth Council. We have Ananya, who participated in Cohort 1, and Mariam, who was, I believe, in the room a little bit earlier, helping to moderate the chat for today’s session; they are extraordinary examples of the type of talent that we’ve been able to attract and to learn from. In year 2 of the cohort, we received almost 2,700 applications worldwide, and from that number we selected 12 council members. We’re anticipating just as fabulous results from them. So that’s generally how we are thinking about children’s data through our data governance frameworks. Riffing off of what I’ve heard today, we can also advocate through data governance for the inclusion and enforcement of the rights of children in national data privacy laws, especially as we know, here at the IGF, that lots of countries are thinking about how to develop those privacy laws. We should be advocating for the rights of children to be included. And in civil society, there’s opportunity to explore alternative approaches to data governance. Data cooperatives, which are community-driven, can help groups think about how to leverage their data for their own benefit. Civil society perhaps has room to explore the concept of data intermediaries, where a trusted third party works on behalf of vulnerable groups like children to negotiate when their data is accessed and to enforce sanctions when data is not used in the way that it was intended.

Moderator:
Okay, thank you so much. And since Njemile has already touched on bringing in civil society, why don’t we move to Emma Day and ask her the next question. So Emma, how do you think civil society organizations could work with children to promote better data protection for children?

Emma Day:
Thanks so much for the question, and yeah, I think Njemile came up with some really good starting points for this conversation already. I think to involve children, it has to really be meaningful, and one of the difficulties, not just with children but in fact with consulting communities in general on these topics of data governance, is that it’s very complex and it’s hard for people to understand immediately the implications of data processing for their range of rights, particularly projecting into the future and what those future impacts might be. So I think, to make that consultation meaningful, you have to do a certain amount of education first. Some of the great ways to do this are to involve children in things like data subject access requests, where they can be involved in the process of writing to a company and requesting the data that that company is keeping on them, so they can see in practice what’s happening with their data and form a view on what they think about that, and for children to be involved in
these kinds of community data auditing processes. There has been some community-based auditing of AI going on, which I don’t think has involved children so far, but obviously older children could get involved in these kinds of initiatives. And I think involving children in conceptualizing how data intermediaries can work best for children of different ages is really important. This is something we talked about a couple of years ago now; I was one of the authors of the UNICEF manifesto on data governance for children, and we had a few ideas in there about what civil society organizations can do to involve children. I haven’t seen a lot of this happen in practice. Another one of the key things that I would like to see is for civil society organizations to involve children in holding companies accountable by auditing their products, by doing these kinds of community-based human rights impact assessments. And I think we need to think about not just the platforms and the apps but also things like age verification tools, edtech products, health tech products, and tools that are used in the criminal justice system and in the social welfare system; technology products impact almost all areas of children’s lives. And we have to remember that all of these are private sector companies, even where they’re providing solutions that are essentially there to promote children’s rights. We need to ensure that children are involved in auditing those products and making sure that they really do benefit children’s rights. But I think to do that, civil society organizations need to ensure that they involve academics, technologists, and legal experts to make sure that they really get it right, because these are complex assessments to make.

Moderator:
Thank you very much. Let’s move to Theodora. I know you mentioned about a lot of the existing international standards, conventions and laws regarding children’s rights and their data. What about the regulations and legislations which are underway to address some of these concerns? Are there any particular areas where these regulations could do better or any other suggestions that you might have for any such future conventions?

Theodora Skeadas:
Hi everyone. That’s a really great question. Thanks, Manya. So I’m going to screen share again just so folks can see the database that I was referencing earlier. I think to me it’s not so much that there are specific technical gaps in what we’re seeing. Of course this is a US-focused conversation, and it’s important to mention that there is legislation being discussed globally outside of the US as well, and that legislation is inclusive of children’s safety issues. For example, in the European Union, transparency-related measures like the Digital Services Act and Digital Markets Act will have impacts on child safety, the new UK online safety bill which is underway will also impact child safety, and legislative discussions are happening elsewhere as well. But within the US, where this data set was collected and where my knowledge is strongest, I think that it is pretty comprehensive, although it’s interesting to note that one of the questions that I saw in the chat touched on a theme that wasn’t discussed in this legislation. Specifically, the question was whether there was, and I’m just looking through the chat again, here we go: legislation related to assistive ed tech in schools. I observed here that there are four school-based policies and two hardware-based policies, but none of them are focused on assistive ed tech. The ones that are focused on schools look more at access, age verification, policies, and education, and the hardware ones are focused more on filtering and technical access. So you can see those here, like requiring tablet and smartphone manufacturers to have filters that are enabled at activation and only bypassed or switched off with a password. So you can see that there is quite a range. I think to me, the bigger concern is whether this legislation will pass. We see a really divided political landscape.
And even though we’re seeing a proliferation of data and data-related issues around children in legislative spaces, the concern is that there isn’t going to be a legislative majority for this legislation to pass. So it’s not per se that I see specific gaps and more that I have broader concerns about the viability of legislation and the quality of the legislation, because not all of it is equally as high quality. And so I think the increasing fraught political landscape that we find ourselves in is making it harder to pass good legislation, and there are competing interests at play as well. Thank you.

Moderator:
Thank you very much. I would now like to thank all our speakers for sharing their insights with our attendees. At the same time, I would like to thank our attendees, who I see are having a very lively chat in Zoom. Since you have so many questions, why don’t we open the floor for questions from the audience? We will be taking questions from both the onsite and online audience. If you’re onsite and have a question, there are two standing mics right there. Kindly go to the microphones and ask your question, stating your name and the country you’re from, and after that we will take questions from the chat.

Audience:
My name is Jutta Kroll. I’m from the German Digital Opportunities Foundation, which is heading a project on children’s rights in the digital environment. First of all, let me state that I couldn’t agree more with what Sonia said in her last statement: if we don’t know the age of all users, age verification wouldn’t make sense. We need to know whether people are over a certain age, belong to a certain age group, or are under a certain age. My question would be: we need to adhere to the principle of data minimization, so have any of you already thought about how we can achieve that without creating a huge amount of additional data? Even the Digital Services Act doesn’t allow collecting additional data just to verify the age of a user. So it’s quite a difficult task, and Edmon has already said that we could trust companies, when they do the age verification, to delete the data afterwards, but I’m not sure whether we can do so. So that would be my question, and the second point would go to the last speaker, Theodora: when you gave us a good overview of the legislation, the question would be how we could ensure that legislation that is underway takes into account the rights of children from the beginning, not like it was done in the GDPR, where a reference to children’s rights was put into the legislation at the very last minute. Thank you for listening.

Moderator:
Thank you very much. Why don’t we deal with the first half of the question? Would any of the speakers like to take that? We will then direct the second question to Theodora. Yes, please go ahead.

Edmon Chung:
I’m happy to add to what I already said. In those cases, it’s pseudonymized data, right? Instead of collecting the actual data, it is very possible for platforms to implement pseudonymized credential systems. And those vouching for a participant’s age could be distributed: it could be schools, it could be parents, it could be your workplace or whatever. But as long as it is a trusted data store that does the verification and then keeps a pseudonymized credential, the platform should trust that pseudonymized credential. So I think that is the right way to go about it. The other part: as much as I still think it is right to ask for the data to be deleted, can we trust companies? Probably not, but of course we can have regulation and audits and those kinds of things. But the trusted anchors themselves also, whether it’s a school or whatever trusted anchor the person actually gives the age verification to, that institution should also delete the raw data and just keep the verification record: verified or not verified. And that’s the right way to do privacy in my mind. Thank you.

Moderator:
I think Professor Livingstone wants to add something. Please go ahead.

Sonia Livingstone:
If I may, yes. Actually, Edmon just said much of what I wanted to say, so I completely agree. And I’ve been part of a European effort, euCONSENT, which is also seeking to find a trusted third-party intermediary that would do the age check and hold the token, rather than have it held by the companies. So I think ways are being found. Clearly, the context of transparency, accountability, and third-party oversight that scrutinizes those solutions will really need to be strong, and that also must be trusted. I’d add that I think we should start this process with a risk assessment, because not all sites need age checks; not all content is age-inappropriate for children. So I would advocate that we begin with the most risky content and with risk assessment, so we don’t just roll out age verification excessively. And I’ll end by noting that big tech already age-assesses us in various ways. I think the big companies already know the age of their users to a greater or lesser degree of accuracy, and we have no oversight or transparency of that. So I think the efforts being made are trying to right what is already happening, and happening poorly, from the point of view of public oversight and children’s rights.

Moderator:
Thank you. Emma.

Emma Day:
I think this is still a question that everyone’s grappling with, really, and there are differing views in different jurisdictions around how well age verification products comply with privacy laws. I would really agree with what Sonia said about starting with a risk assessment. I think we need to look first at what problem we’re trying to solve, and then whether age verification is the best solution, because if we’re going to process children’s data, it should be necessary and proportionate. So we have to look at what other solutions there are that are not technical, that might address the problem, rather than looking at just age verification across everything. I think also there’s an issue, certainly under EU law, with pseudonymization (which is very difficult to say): pseudonymized data is still personal data under the GDPR, so it’s not that straightforward within the EU to just use pseudonymized data as an alternative. So I think it’s still very tricky, and at the European Union level, this is not something that has been settled yet either.
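The point about pseudonymized data still being personal data can be illustrated with a toy example (all identifiers and values here are invented): replacing an identifier with a salted hash produces a consistent pseudonym, and anyone who holds, or can rebuild, the mapping can re-identify the person.

```python
import hashlib

def pseudonymize(identifier: str, salt: str) -> str:
    """Deterministic pseudonym: same input and salt always yield the same token."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]

salt = "org-internal-secret"  # hypothetical value held by the data controller
records = [("alice@example.com", 12), ("bob@example.com", 15)]

# The "shared" dataset no longer shows email addresses...
pseudonymized = [(pseudonymize(email, salt), age) for email, age in records]

# ...but the controller (or anyone who learns the salt) can rebuild the
# mapping and re-identify every row.
reidentify = {pseudonymize(email, salt): email for email, _ in records}
```

The toy captures the caveat raised here: because the transformation is reversible for anyone holding the salt, the data remains personal data rather than anonymous data.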

Moderator:
Okay, and Theodora, any remarks from you?

Theodora Skeadas:
Sure. Yeah, I think this is a really great question. It’s not easy to ensure that legislation takes into account the stated rights of children. I would start with education. Frankly, from my experience interacting with legislators, since I participate in the advocacy process, I have found that most legislators are just under-informed, so we need to make sure that they understand what these rights and principles and standards actually are. What does it mean for the right to privacy to be manifest in legislation? What are the best interests of a child? What is the right to freedom of expression? What do we think about the right to be informed when it comes to children? I think most legislators just don’t really know what those things mean. And so educating them, in particular by building coalitions of civil society actors and multi-stakeholder actors, can be very effective in educating and influencing legislators around the rights of children. And then, as was also mentioned in the chat, I believe Omar put it in a few minutes ago, including young people in decision-making processes is not just essential, it’s empowering. I think that’s an important part of the process too. Bringing together legislators, the people who are actually writing legislation, and the children themselves is really important, so that the legislative process can be child-centric and really center the voices and experiences of the children that we’re trying to serve. And last, I think it’s important to recognize that this needs to be done in an inclusive way, in a way that engages children from all different kinds of backgrounds, so that all different experiences are included as legislation is happening. But again, I think education really is at the core here. Legislators want to hear from us and are excited when we raise our hands. Thank you.

Moderator:
Thank you very much. We will now be taking questions from the online audience. May I request the online moderator to kindly read out any questions or comments that we may have received from the online audience?

Online moderator:
Hi. So we have two questions from the online participants and two comments. Question one is from Omar, who is 17 years old. He asks: how can child-led initiatives be integrated into data governance, ensuring that children have a voice in shaping policies that directly impact their digital lives? He is the founder and president of Project Omna, which is an upcoming AI-powered mobile app focused on children’s mental health and child rights, and he wants to increase his impact in data governance for children. The second question is from Paul Roberts from the UK, who asked: when it comes to tech companies designing products and services, how common is it for them to include child rights design in their process, and at what stage: proactive, or an afterthought for risk minimization? Comment one is also from Omar, who said that he is from Bangladesh and is one of the 88 nominees for the International Children’s Peace Prize 2023 for advocacy work. He is the founder and president of Project Omna, and he is also the youngest and only child panelist of every Global Digital Compact session, representing children globally, and has provided statements on data protection and cyber security for children. His suggested answers to the guiding questions that we started the session with are: one, children’s perspectives are dynamic, and he suggests the use of interactive, story-based digital tools to help children grasp the importance of their digital data and rights, adapting these tools to different age levels. Two, collaborate with tech companies to develop age verification methods that employ user-created avatars or characters, safeguarding personal data.
Children’s feedback will be instrumental in refining this approach. And three, establish child-led digital councils or advisory groups for direct input into policy decisions. These groups should meet regularly, ensuring real-time feedback from children and aligning policies with their evolving needs and digital experiences. The final comment is from Ying Chu, who says that maybe the younger generations know more about privacy protection and how to protect their data than educators or us. After all, the children were born in the internet age and they are internet kids; many of us are internet immigrants. Oops, sorry, sorry.

Njemile Davis-Michael:
Okay, I’m going to go ahead and start with the first one, and I would love to see your application. One of the things that we try to do there is to raise the voice of youth advocates, not just to the level of international development organizations like USAID, but also to empower them to activate other youth networks. So we have a platform that we use to encourage youth advocates to do that, and we try to do it in a way that is inclusive and awareness-raising, and that helps to inspire and incentivize solutions that we have not thought of yet. There’s this constant tension between adults, who have the authority to make decisions, and children, who understand what’s best for them but perhaps don’t have the agency to act on it.

Moderator:
Okay, are there any other comments from the panellists? And since we are running short on time, I would otherwise like to move to the next segment. Okay, we see Professor Livingstone has some comments. I would request you to kindly keep it short.

Sonia Livingstone:
Yeah, it’s funny, I’m more familiar with 80 for 30, but I probably have an irritation about social altruism, rightly so. I think the challenge is for those who haven’t yet thought of it or haven’t yet embraced its values. And so my answer to Omar, and also to Paul Roberts, would be to talk more about, and give more emphasis to, child rights impact assessments. I think many companies understand the importance of impact assessments of all kinds, and a child rights impact assessment requires and embeds youth participation as part of its process, along with gathering evidence and considering the full range of children’s rights. Perhaps it’s more a mechanism in the language of companies, and so one that, if child rights impact assessments were embedded in their process, perhaps by requirement, I think would make many improvements.

Moderator:
Thank you, Professor Livingstone. As we enter the final eight minutes of this very, very active and enlightening session, I’m very, very happy to invite our esteemed speakers to kindly share their invaluable recommendations in less than a minute, if possible. The question for all the panelists is how can we involve children as active partners in the development of data protection policies to ensure greater… Before I give the floor to our speakers, I would also like to strongly encourage the audience to seize this opportunity and share the recommendations by scanning the QR code, which is right now displayed on the screen or by accessing the link shared in the chat box. I would now like to welcome Professor Livingstone to kindly share her recommendation once again in less than a minute. Thank you.

Sonia Livingstone:
Well, I’ve mentioned child rights impact assessment, and perhaps that is my really key recommendation. I think that what we see over and again in child and youth participation is that children’s and young people’s views are articulate, are significant, and are absolutely valid. The challenge really is also for us who are adults. Every time we are in a room or a meeting or a process where we see no young people involved, we must point it out. We must call on those who are organising the events, and that includes ourselves sometimes, to point out the obvious omission, and be ready to do the work to say: these are the optimal mechanisms and here is a way to start. Because people find it hard, but youth participation is absolutely critical in this domain and is, of course, young people’s right.

Edmon Chung:
Thank you. Edmon? I will be very brief. I think a children’s IGF is called for, and that’s the beginning of this wider awareness. And I think it’s about building capacity as well. I mean, you can’t just throw children into a focus group for two hours and expect them to come up with a brilliant policy decision, right? So it’s a long-term thing, and it starts with the internet governance community and all these processes actually having children as part of a stakeholder group. That, I think, is probably a good way to go about it.

Njemile Davis-Michael:
Thank you. I agree with everything that I’ve heard, and I would add that we need to do a better job discussing digital and data rights in formal education institutions. I think we can do a much better job of that globally, so that there’s a welcoming, encouraging environment to hear from children how they would like to advance their digital identities in a digital society, where they have awareness, tools, and opportunities to do so in safe ways, with mentorship and guidance.

Emma Day:
Emma? Thank you, some great suggestions so far. I would like to just emphasise that children are not a homogenous group, and I think it’s really important to centre the most vulnerable and marginalised children; that can be within a country, or it can be geographic, particularly considering the global reach that a lot of apps and platforms have these days. There’s a particularly great scholar I would recommend reading up on, Afsaneh Rigot, whose work on design from the margins talks about how, if products are designed for the edge cases, for the most difficult, most risky scenarios, in the end it’s going to benefit everyone much more. I’m going to share a link to that in the chat, thanks.

Moderator:
Thank you and finally Theodora.

Theodora Skeadas:
Yeah, I think this has been reiterated a few times, but it's worth mentioning again. Really, we need to be centring the voices of children as active participants in conversations about their well-being, and this can be done by including them in surveys, focus groups, workshops, and various other child-friendly methods. Like I said, in the legislative process I think children should be empowered to advocate for themselves, specifically older children, but children from all different backgrounds, because this is their well-being at stake. I also think that when it comes to companies, I would personally like to see children represented on these advisory boards. That hasn't traditionally happened, and I put a few of the advisory boards in the chat, because these are ways to elevate the voices of children directly in conversation with the people making policies for the platforms.

Moderator:
Thank you. Thank you very much, ladies and gentlemen. As we come to the end of this enlightening session, I would like to express my heartfelt gratitude to our distinguished speakers for their unwavering commitment to sharing their knowledge and expertise, and for also making our lives easier as moderators, because I see you have been responding to the comments and questions in the chat box. I would also like to extend my deepest appreciation to the very, very active audience for their extremely energetic engagement and thoughtful participation. Without your presence this session would not have been as meaningful. And while we are on the subject of people who have been instrumental in making this session a success, I would like to thank my teammates, the very talented co-organizers of all the four workshops that we have hosted during the UN IGF 2023, Keo from Botswana and Nelly from Georgia. I cannot thank you both enough for your exemplary commitment, relentless hard work, awe-inspiring creativity and tireless efforts, in the absence of which we would not have been able to create the impact we have. I want everyone here in attendance to be aware and appreciative of the countless hours, late nights and personal sacrifices this team has made to keep this ship afloat. It was my good fortune indeed to have had the honor of leading this exceptional team, so thank you once again for making this happen. As we conclude this session, I urge all of us to kindly reflect on the insights we have gained and the recommendations put forth. Let us not let this be just another event or seminar, but rather a catalyst for action. It is up to each of us to take the lessons learned today and apply them in our respective fields, organizations and communities. Together we can create a better world for ourselves and future generations, and we are right on time. Arigato gozaimasu, sayonara. Thank you.

Audience

Speech speed

155 words per minute

Speech length

710 words

Speech time

275 secs

Edmon Chung

Speech speed

142 words per minute

Speech length

1963 words

Speech time

828 secs

Emma Day

Speech speed

171 words per minute

Speech length

1779 words

Speech time

623 secs

Moderator

Speech speed

164 words per minute

Speech length

2378 words

Speech time

868 secs

Njemile Davis-Michael

Speech speed

154 words per minute

Speech length

2257 words

Speech time

880 secs

Sonia Livingstone

Speech speed

164 words per minute

Speech length

2124 words

Speech time

776 secs

Theodora Skeadas

Speech speed

162 words per minute

Speech length

2268 words

Speech time

841 secs

Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150

Table of contents


Full session report

Mohammad Atif Aleem

The study conducted by Atif in collaboration with the Alexander von Humboldt Institute for Internet and Society sheds light on the low internet access and digital inclusion in Southeast Asian countries, particularly in Vietnam. This finding highlights the pressing need to increase internet connectivity and promote digital participation in the region. On a positive note, the study acknowledges the potential of new digital technologies in improving internet access and inclusion.

Furthermore, Atif, a research analyst at a reputable IT consulting firm, emphasizes the importance of addressing digital access for disadvantaged groups in India. The study reveals that internet penetration in rural India stands at only 29%, indicating a significant digital divide. The Digital India and Skill India initiatives have been implemented to improve access and bridge this gap. However, there remains a concern for the digital inclusion of disadvantaged groups, underscoring the need for innovative approaches.

In order to expand high-speed internet connectivity in remote or inaccessible areas, the study suggests exploring innovative solutions. Examples provided include Google’s Internet Saathi initiative, which aims to provide internet access to rural women through community networks. Additionally, the use of low earth orbit satellites, community radios, and collaborations between major integrators of digital networks and organizations such as MIT Sloan India Lab are highlighted as potential tools to overcome challenges in expanding connectivity.

The study also recognizes the importance of inclusive technology for people with disabilities. Carlos Pereira’s Livox app is highlighted as a prime example of inclusive technology, as it was deemed the best inclusion app by the United Nations. Developed for Clara, Pereira’s daughter with cerebral palsy, the app has since been adopted by individuals with various disabilities.

Collaborations between private entities, governments, and other companies are considered necessary for significant impact on digital activities and inclusion. The study cites examples of Google consulting the government of India and collaborating with Facebook for its Internet Saathi program, further emphasizing the importance of public-private partnerships.

Entrepreneurs are encouraged to explore various modes of partnerships, including seeking guidance from academia and participating in internet governance schools. These collaborations can provide valuable insights and expertise in navigating the digital landscape and promoting digital inclusion.

To support digital inclusion efforts, the study suggests funding sources such as NGOs like the Internet Society, as well as participation in hackathons run by IT software firms. These initiatives can provide the resources and support necessary to address the challenges and barriers to digital inclusion.

In conclusion, the study highlights the low internet access and digital inclusion in Southeast Asian countries, particularly Vietnam, underlining the urgent need for increased connectivity and participation. It explores various approaches such as new digital technologies, inclusive technology for people with disabilities, innovative solutions for remote areas, collaborations between different stakeholders, and funding sources to address these challenges and achieve digital inclusion.

Anna

The discussions focused on the barriers that marginalised youth face in internet governance and the importance of inclusive participation. It was highlighted that these barriers stem from various social, economic, political, and cultural contexts. By identifying and understanding these factors, it becomes possible to develop effective practices and policies that promote digital inclusion.

One of the key points made was the need for concerted efforts to navigate and dissolve these barriers. It was emphasised that a collective approach is necessary to achieve the inclusive participation of marginalised youth in internet governance. Participants stressed the importance of unpacking these factors comprehensively in order to recognise and map solutions that address the issue as a whole.

Anna expressed concern regarding digital access and participation barriers specific to certain regions. Unfortunately, no further details or supporting facts were provided on this topic, but it indicates that there are unique challenges faced by marginalised youth in different geographical areas.

Another important area of interest was strategies and practices that promote meaningful participation for young people in internet governance. Participants discussed the significance of engaging young people in decision-making processes and ensuring their voices are heard. It was recognised as a positive step towards reducing inequalities and empowering young individuals to actively contribute to internet governance. Unfortunately, no specific strategies or practices were mentioned in the provided information.

The discussions also touched upon the importance of multi-stakeholder cooperation in advancing the inclusion and participation of young people in internet governance. Successful initiatives were highlighted, involving collaborations between the private sector and government to boost participation. It was considered ideal to have stakeholders from multiple sectors working together to address the challenges faced by marginalised youth.

In conclusion, the discussions highlighted the barriers faced by marginalised youth in internet governance, such as social, economic, political, and cultural factors. The importance of mapping these factors and developing comprehensive solutions to promote digital inclusion was emphasised. It was acknowledged that concerted efforts and a multi-stakeholder approach are necessary to enable the inclusive participation of marginalised youth in internet governance. However, more specific strategies and practices need to be explored to achieve meaningful participation and address region-specific challenges.

Pavel Farhan

The analysis includes several speakers discussing different aspects of internet access and inclusion. One of the main voices in this discussion is Pavel, a program officer at the Asian Institute of Technology. Pavel’s work is heavily grounded in technology and academia, and he also represents civil society. He is passionate about promoting equal internet access for minority groups and believes in the importance of youth participation in internet governance. Pavel strives to create opportunities for inclusive internet access and highlights the significance of youth involvement in the multi-stakeholder process.

Another topic highlighted in the analysis is the significant barriers to digital access faced by underprivileged and underrepresented groups in Bangladesh. These barriers include limited internet infrastructure in rural areas, prohibitive costs of internet access for those in low-income communities, lack of digital literacy skills, and the language barrier, as not all online content is available in the primary language of Bangladesh, Bangla. These challenges have a drastic impact on the digital inclusion of vulnerable communities.

However, there are government-led initiatives that aim to address these barriers. The ‘InfoLadies’ programme, for instance, involves women travelling to rural areas to provide internet services, thus improving digital literacy and access. Additionally, the ‘Bcash’ mobile finance initiative has provided opportunities for people without traditional bank accounts to engage in digital transactions, promoting economic inclusion. These initiatives play a crucial role in bridging the digital divide and ensuring that underprivileged individuals have access to the digital world.

Educational institutions are identified as key players in reducing the digital divide and fostering internet governance literacy among youth. While there is a push for more digital literacy in general, there is not enough focus on specific courses teaching internet governance. The analysis stresses the importance of equipping students with the necessary skills to navigate the online world and understand the implications of their digital actions. It suggests that universities should foster leadership and create opportunities for students to advocate for their online rights by hosting forums, clubs, and events related to digital inclusion and governance.

Furthermore, universities contribute to addressing the digital divide by conducting research and gathering data on internet access and usage. This research helps identify gaps in access and usage, allowing policymakers to make informed decisions. Universities can also play a vital role in equipping students with essential digital skills through structured programs.

Overall, the extended analysis showcases the importance of various stakeholders, including individuals like Pavel, government initiatives, and educational institutions, in promoting equal access to the internet and fostering digital literacy. It highlights the need for collaboration and multi-stakeholder involvement to bridge the digital divide and ensure that underprivileged and underrepresented groups have equal opportunities in the digital world.

Jaewon Son

The analysis of the given statements highlights several important points raised by multiple speakers. Jaewon, for example, emphasizes the significance of including citizens and people in policymaking and urban development. This belief is supported by her PhD research at the Karlsruhe Institute of Technology, which focuses on this very topic. Furthermore, Jaewon also recognizes the connection between internet governance and social/environmental issues. Her experience with the Asia Pacific Internet Governance Program has led her to understand that internet governance is not only a technological concept but also relevant to daily life and social and environmental issues.

In addition to these points, Jaewon advocates for the inclusion and increase of youth and citizen participation in internet-related matters. However, no specific supporting facts were provided for this argument. Nevertheless, it can be inferred that Jaewon believes that the active involvement of young people and citizens in internet governance is essential for reducing inequalities and promoting industry and innovation.

Another notable observation made in the analysis is the emphasis on digital needs and stakeholder participation in advanced internet environments, particularly in South Korea. The supporting facts mentioned include the country’s strong internet connection and high smartphone ownership. However, it is also noted that while basic digital needs are met in South Korea, stakeholder participation is an additional stage that needs to be achieved. This suggests that there is a need for increased engagement and involvement of stakeholders in shaping internet governance policies.

Furthermore, the analysis brings attention to the impact of cultural barriers and gender imbalance on stakeholder participation in internet governance discussions. In the Korea Internet Governance Committee, for instance, Jaewon Son was the only youth and one of the few women representatives, while most participants were male IT professors. This observation highlights the need for addressing cultural and gender disparities to achieve more inclusive and diverse stakeholder involvement.

The analysis also points out a negative factor affecting youth commitment to internet governance – concerns about job security. Many Korean youths, it is mentioned, quit involvement in internet governance due to fears of not securing a job in their field. This suggests that job security is a significant barrier to sustained youth participation in internet governance initiatives.

Lastly, the need for more understanding and opportunities in internet governance for individuals from different backgrounds is highlighted. The speaker expresses the belief that skills learnt in internet governance can be beneficial in various fields outside of IT. It is also argued that it is important for more people to understand what internet governance is about. As such, the speaker supports the idea of promoting internet governance education and creating more accessible opportunities for individuals with diverse majors and backgrounds.

In conclusion, the analysis reveals the importance of including citizens and people in policymaking and urban development, as well as the connection between internet governance and social/environmental issues. It underscores the need for increased youth and citizen participation in internet-related matters and emphasizes the significance of meeting both digital needs and stakeholder involvement in advanced internet environments. The impact of cultural barriers, gender imbalance, and job security concerns on stakeholder participation is also highlighted. Furthermore, the analysis brings attention to the importance of more understanding and opportunities in internet governance for individuals from different majors and backgrounds. Overall, the insights gained from the analysis shed light on various aspects of internet governance and its implications for inclusive and sustainable development.

Audience

Throughout the conversation, there was repeated emphasis on the act of saying goodbye, with multiple individuals expressing their intention to leave. This repetition not only served as a common theme, but also underscored the significance of this action. The frequent utterance of “bye” could suggest a desire for closure or a need to conclude the discussion. It could also indicate a sense of politeness and respect among the participants, as they take the time to bid farewell before departing.

Furthermore, the repetition of “bye” might indicate a strong emotional connection among the conversationalists, as they repeatedly express their desire to part ways. It could be seen as a way of acknowledging the shared experience and expressing gratitude for the interaction. This repetition in saying goodbye could serve as a gesture of goodwill, reinforcing the positive nature of the discussion.

One could also interpret the repeated “bye” as a form of social ritual or convention. In many cultures, it is customary to exchange pleasantries and bid farewell before leaving a conversation or gathering. By adhering to this cultural norm, the speakers demonstrate their adherence to social etiquette and appropriate behavior.

In conclusion, the repeated use of “bye” during the conversation serves as a common and notable theme. It signifies the desire for closure, politeness, emotional connection, and adherence to social conventions. This emphasis on saying goodbye reinforces the cordial and respectful nature of the interaction, underscoring the importance placed on proper communication and social etiquette.

Tatiana Houndjo

Tatiana Houndjo is an IT professional from the Benin Republic in West Africa. She works as an IT system and infrastructure engineer in a private company with branches in Ivory Coast, Niger, and Togo. Her role involves helping businesses and governments implement digital technologies as part of their processes. Tatiana provides support and guidance in the adoption of digital tools and technologies, ensuring their efficient integration into existing systems, and assisting in the resolution of technical issues.

In addition to her work in the IT field, Tatiana is actively involved in the internet governance ecosystem. She was selected for the Women’s DNS Academy Fellowship in 2018, which marked the beginning of her journey in this domain. Since then, she has led various projects and programs to promote women’s participation in internet governance. Her efforts were recognized, and she was elected as the vice chair for a two-year term.

Tatiana firmly advocates for the importance of digital tools and technologies in today’s world. She believes that embracing these tools and technologies is essential for businesses and governments to stay competitive and drive innovation. Her work focuses on assisting organizations in implementing these tools and technologies to enhance productivity, efficiency, and overall performance.

However, Tatiana also highlights the need to consider the hierarchy of needs for young people. She acknowledges that many young individuals struggle with basic needs and are unable to actively participate in internet governance discussions. It is challenging for them to engage when their basic needs, such as access to food and shelter, are not met. Therefore, initiatives must address these fundamental needs before expecting their active participation.

Furthermore, Tatiana stresses the need for meaningful participation of young people in the internet governance ecosystem. She believes that their insights and perspectives are valuable and should be considered in decision-making processes. She advocates for partnerships between stakeholders to create inclusive environments that empower young people to contribute and have their voices heard.

An important challenge highlighted by Tatiana is the inequality in internet usage. There is a clear divide between those who have access to the internet and information and those who do not. Additionally, there is a discrepancy in the efficient use of the internet. Bridging this divide is crucial to achieve SDG 9 (Industry, Innovation, and Infrastructure) and SDG 10 (Reduced Inequalities). Efforts should be made to ensure equitable access to the benefits of the internet.

Lastly, Tatiana raises concerns about the lack of meaningful data to monitor and evaluate actions in the internet governance ecosystem. She emphasizes the need for young people, private companies, and governments to collect relevant data that can provide insights into internet usage and its impact. The Internet Society’s initiative, Internet.Beijing, is an example of a project aimed at monitoring internet usage. Such initiatives are essential for informed decision-making and evidence-based actions.

In conclusion, Tatiana Houndjo, an IT professional and an active participant in the internet governance ecosystem, advocates for the importance of digital tools and technologies. She supports businesses and governments in implementing these tools and technologies. However, she also recognizes the need to address the hierarchy of needs for young people and ensure their meaningful participation in internet governance discussions. Additionally, she highlights the inequality in internet usage and the lack of meaningful data to monitor and evaluate actions. By addressing these challenges, stakeholders can work towards achieving SDG 9 and SDG 10, promoting industry, innovation, infrastructure, and reduced inequalities.

Rashad Sanusi

Rashad Sanusi, a technical support at Digital Grassroots, is taking the initiative to commence a discussion centered around the crucial topics of digital inclusion and Internet Governance. This discussion will involve four speakers who will share their personal experiences and insights on the issue of digital inclusion within their respective communities. Through this, Rashad aims to shed light on the barriers to internet access and explore potential solutions in order to promote widespread inclusivity.

Rashad’s emphasis on understanding the barriers to internet access highlights the need for a comprehensive understanding of these challenges and finding ways to overcome them. By delving into the root causes of limited internet access, Rashad aims to generate discussion and brainstorm practical strategies that can empower individuals and communities to navigate and overcome these hurdles effectively.

Moreover, Rashad’s goal is to foster an interactive and inclusive environment during the discussion. This creates an atmosphere where participants feel encouraged to contribute and exchange ideas freely. By promoting dialogue and collaboration, Rashad seeks to cultivate an atmosphere that is conducive to exploring innovative approaches to digital inclusion.

Rashad’s advocacy for inclusivity in Internet Governance signifies the importance of ensuring that everyone’s voice is heard, especially those at the grassroots level. He believes that by comprehensively understanding the challenges faced by individuals in these communities, policies and initiatives can be developed that align with their needs. Rashad contends that through inclusivity, the decision-making process will be more representative and effective in addressing the collective needs of all stakeholders involved.

In conclusion, Rashad Sanusi’s discussion on digital inclusion and Internet Governance aims to tackle the barriers to internet access and promote inclusivity. By bringing together speakers to share their experiences and perspectives, Rashad hopes to foster an interactive and inclusive environment that facilitates collaboration and generates innovative solutions. Through his advocacy for inclusivity in Internet Governance, Rashad emphasizes the need to consider the voices of those at the grassroots level, ensuring their needs are prioritized in decision-making processes.

Session transcript

Rashad Sanusi:
Okay, can you hear me online? Okay, hi everyone. Okay, good morning, good afternoon. Depending on where you are, I am Rashad Sanusi, technical support at Digital Grassroots, and I am thrilled to welcome you all to this session on digital inclusion. So for this session, I will share my screen before we start. Our session will follow this outline. Let me share my screen. Okay, I want to share my screen. Sorry for that, I just want to share my screen. It’s okay now, thank you. We can continue. So I am Rashad Sanusi, and I’m super happy to have you all for this session, for this session on digital inclusion, and this session will follow this outline. First we have the welcome and introduction, and after we have an introduction and a nice breakout before the panel discussion, where we have four amazing speakers. They will share their experience about digital inclusion in their own community, and after that we will go for Q&A and participant insights before the closing remarks. So as I was saying, I’m super happy to have you all today for this session, and I am here with my colleague Anna, and we are happy to moderate this session. So to start, today we embark on the journey to explore digital inclusion and how this intersects with our participation in Internet Governance. So this session is about to know how we are faced, how our community are facing the barrier to access to Internet, and also what are the challenges we are facing in our community about Internet Governance. So by understanding this challenge, we will make sure that we know how we can be more engaged in this space, so we can have our voice heard and also know the challenge we are facing at the grassroots level. So our aim today is to create an interactive and inclusive environment where everyone can be invited to share their own view, and how we can break this barrier to help everybody to be engaged in Internet Governance.
So I will invite Anna for the

Anna:
next part of the session. Thank you. Thank you. Hi everyone, thank you so much for joining us. I’ve been having some issues with my audio, so please let me know if you can hear me well. Yes, we can hear you. Amazing. Yeah, thank you so much to all of you for joining us. And as Rashad mentioned, navigating through intersectional access and participation in Internet Governance, we recognize a vast array of barriers, notably impacting marginalized youth. And this challenge is stemming from different factors; social, economic, political, cultural, and other contexts present distinct obstacles in different environments. And we believe that unpacking these factors is vital in enabling us to recognize and map solutions that fully grasp the issue and also leverage our collective insight toward effective practices and policies. And to start the session on the note of a collaborative exploration, we invite you to share words or phrases that come to your mind when you think about the barriers faced by marginalized youth. Through the Menti, I’m going to screen share. Rashad, would you mind stopping your screen share for a moment so I can screen share? Thank you. So we invite you to join us in Menti and share some of your thoughts about this issue before we proceed to the panel. I believe you should be able to see my screen. So please use the code that you can see, 1525 4103, and let us know your thoughts. Thank you so much for contributing your ideas. I can see quite an array of issues. And yeah, I think that this word cloud reflects the collective acknowledgement of these barriers, and also we can see how diverse they are. Something that I was mentioning earlier: the intersection of different factors and contexts that come into the arena when we talk about inclusive participation and access. And this is something that we hope to discuss today during the panel discussion with our amazing speakers, and to see how we can leverage this knowledge.
Keeping in mind this visualization and this map that we have on the screen, and how we can navigate these issues to promote the meaningful participation of young people in internet governance across different contexts, I will now give the word to Rashad to present our guest speakers, who will dig deeper into this conversation and guide us through this discussion.

Rashad Sanusi:
Thank you, Anna. We continue our session, and the next part is the panel. We have four amazing speakers, and I will let them introduce themselves, but first I will introduce them briefly before giving them the floor. So, we have Mohammad Atif Aleem. He’s from India, and he is currently engaged as a research analyst at TCS, and he has a relevant background in research, consultancy, information technology, sustainability, and internet governance. Also, we have Jaewon Son. She’s a doctoral researcher at the Karlsruhe Institute of Technology with a passion for the multi-stakeholder model in internet governance. She’s also an ICANN fellow, an APNIC fellow, and a PGA fellow. We also have Tatiana Houndjo from Benin, an IT professional who advocates for women’s rights, and Tatiana also serves as a digital expert at the AU Youth Corporation Hub, monitoring and advising development projects in IT. We also have Pavel. He works as a program officer at the Internet Education and Research Lab at AIT in Thailand. He has been strongly involved in internet governance since 2019. So, I will let them introduce themselves better, so I will give the floor to Mohammad, after which Jaewon can continue, then Tatiana, and after that Pavel. So, Atif, over to you. Okay, so thank you so much, Rashad, and I’m really excited to be here on this panel to speak on digital access and inclusion. So, is this just an introductory remark that we have to give, or is there any topic you have in mind that we need to address? Yes, just an introduction, and I will guide you through the session, through the panel, after. So.

Mohammad Atif Aleem:
Okay, because you already gave such a nice introduction of everyone.

Rashad Sanusi:
Yeah, I want you to talk more about yourself before we continue.

Mohammad Atif Aleem:
So. Okay, yeah. So, yeah, my name is Atif, and as Rashad introduced me, I have been working as a research analyst for Tata Consultancy Services, which is one of the major IT consulting firms based in India, and it has offices across the world. In my current engagement, I am based out of Sweden. I’m working in the Stockholm office of TCS, where I research the latest technologies that have been evolving in the banking, retail, and innovative digital sectors, and how they can help businesses bridge the gap in bringing modern technologies to the public. And in my previous experience of internet governance, I have been working on various issues like privacy, digital divide, access, and inclusion, and I also collaborated with the Alexander von Humboldt Institute for Internet and Society recently in studying the Vietnamese digital inclusion sector and the state of farmers working in Vietnam. So, we did a holistic study on Southeast Asian countries where digital access has been minimal, and how to increase that, and how new digital technologies could act as a lever, is something we pondered. And I would like to share those insights as well when the discussions go on during the deliberations of this session. So, I’m excited for this session, as it would bring about not only how these technologies could help all of us, which of course has always been the discussion at IGF, but also, from a gender lens, from a youth lens, from a holistic inclusion lens, how it could manifest into a purpose-driven approach to help all the stakeholders in the multi-stakeholder approach in the internet domain. So, I look forward to this interactive session. Thank you.

Rashad Sanusi:
Okay, thank you so much, Atif. I will let Jaewon tell us more about herself. Thank you, Jaewon.

Jaewon Son:
Hi, I’m Jaewon from Korea, and currently I’m based in Germany, doing my PhD at the Karlsruhe Institute of Technology. Before that, my master’s focused on how people access basic needs, such as the internet and water, in developing countries. Now I’m focusing more on how we can engage more citizens when we are making policies and development plans for urban settings. I think my first internet governance experience was when I joined APIGA, the Asia Pacific Internet Governance Academy, in Korea. During that time, I learned how my own work and research can also relate to internet governance, because in Korea internet governance was not a familiar topic that everyone knows. It was a great opportunity for me to learn about it and to see how internet issues are not only technological concepts in themselves, but also linked to our daily life and to social and environmental concepts. So, yeah, I’m looking forward to talking with all the other speakers and seeing how we can include more youth and more citizens in internet governance. Thank you.

Rashad Sanusi:
Okay, thank you so much, Jaewon. Tatiana, you have the floor.

Tatiana Houndjo:
Hi, everyone. Can you just confirm that you can hear me clearly?

Rashad Sanusi:
Yes, we can.

Tatiana Houndjo:
Thank you, amazing. Hi, everyone again. My name is Tatiana Houndjo. I’m from the Republic of Benin, which is a West African country. So, greetings from Benin; if you ever come to West Africa, make sure you put Benin on your list of countries to visit. As Rashad said before, I’m an IT professional. I work as an IT systems and infrastructure engineer in a private company here in Benin, with branches in Ivory Coast, Niger, and Togo. Basically, my everyday work focuses on helping businesses and also governments implement digitalization, digital tools, and digital technologies as part of their processes. As part of this work, I’m happy to have worked on various projects, including public services in Benin. But besides my cap as a professional, I’m also engaged in the internet governance ecosystem. This journey started in 2018, when I got selected for a fellowship, the Women’s DNS Academy Fellowship in Benin, which was a five-day training. After that, I got engaged with the Internet Society, and since then I’m happy to have joined different projects and programs. I became the program lead of the Women in Internet program, which I’m happy to talk about later as part of the discussion, and I also got elected vice chair for a mandate of two years that finished a few months ago. So thank you, everyone. I’m happy to join you for this discussion, and I’m looking forward to sharing more.

Rashad Sanusi:
Okay, thank you so much. Tatiana, happy to have you here. I will let Pavel now to introduce himself.

Pavel Farhan:
Hi, good afternoon to all. This is Pavel, for the record. Thank you, Rashad, and thank you, Anna, for giving me this opportunity to be part of such an amazing cohort of members who will be talking in a very important session today. As Rashad mentioned before, I’m actually based in Thailand. I work as a program officer for the Internet Education and Research Lab at a university here called the Asian Institute of Technology, AIT for short. I have a very technical and academic background, but at times I also wear the hat of civil society. I have been strongly involved in the internet governance space, I would say since APNIC 48 in 2019. As Rashad mentioned, at that time I was a conference fellow. It was also the first time I met Jaewon, so, you know, fond memories. Since then, you could say I didn’t look back, and I’ve been part of several other exemplary fellowships as well, like ICANN and APIGA back in 2021, and even inSIG, the India School on Internet Governance, back in 2021, I believe. I’ve also been an individual member of APRALO, which is the Asia Pacific Regional At-Large Organization. I’m quite eager to make valuable contributions to the internet ecosystem in the Asia Pacific region, and my passion for ICT and ICT for development is what drives me to strive for equal access to the internet for minority groups. As a result, I actively promote an inclusive internet and emphasize the importance of youth participation in the multi-stakeholder process. I’m glad that I got to be part of the Digital Grassroots ambassadorship program, back in, I believe, 2021 as well, for Cohort 5. That’s how I got involved with Digital Grassroots, so thank you so much to them, and thank you for having me today.

Rashad Sanusi:
Thank you so much, Pavel, as well. So, Anna, over to you.

Anna:
Yeah, thank you, Rashad, and thank you to everyone for introducing yourselves. I believe that we have a very unique platform, with so many experiences and backgrounds coming together, so I’m really excited about our discussion. I would like to start with a more general question: maybe you can share how barriers to digital access and participation in internet governance manifest in your own regions and contexts, particularly for disadvantaged and underrepresented groups, and whether you’ve seen a strategy or practice that has proven successful in increasing the meaningful participation of young people in internet governance. I think we can maintain this order, if no one minds.

Mohammad Atif Aleem:
So is it back to me or?

Anna:
Oh, yeah.

Mohammad Atif Aleem:
OK, so then I would go first. Yeah, you raised a very pertinent concern when you mentioned digital access and participation, especially for the disadvantaged, underrepresented, or minority groups in our regions. Speaking about my region in particular, when it comes to meaningful and affordable access, it is still a very big challenge, with millions still unconnected, especially from marginalized communities in different countries of Asia, be it India, Vietnam, Bangladesh, or Pakistan. So there is an urgent need for multi-stakeholder dialogue to focus on providing infrastructure and access to all of them, and to further enable the use of the emerging technologies that have become so prominent now for socioeconomic development as well. Just to quote some statistics, there was a study by MIT Sloan, and as per that study, internet penetration in rural India stands at roughly 29%, which means that over 700 million citizens are still living in digital darkness. So we understand that universal and meaningful access deserves further consideration: it is not just limited to connectivity and infrastructure, but encompasses other aspects like digital literacy and general access to information, which I could see on the screen when participants typed into the Menti quiz run by the moderator here. So it is important to adopt ways to measure access and identify current methods to empirically measure, track, assess, and evaluate the benefits of increasing access and inclusivity. Many private firms have seen that when they do this, they see purpose-driven growth in their revenues as well. We will talk about that part later, but here I can say that with the rapid development of emerging technologies, these technologies should provide an enabling platform for everyone to participate, to raise their voices, and to partake in the benefits as well.
When it comes to India, there have been several initiatives from the Government of India, like Digital India and Skill India, which try to bridge the access and technology divide among the masses. Likewise, there have been initiatives from private firms as well. For example, one initiative that was a big hit was Google’s Internet Saathi initiative, which empowered female ambassadors to train and educate women in more than three lakh villages of India on the benefits of the internet in their day-to-day lives. That was one good initiative from Google, which tried to bridge the connectivity gap by building a chain of women entrepreneurs and women farmers to propagate knowledge of the internet among other community members as well. So there have been community initiatives, with the help of private firms, governments, and other stakeholders in the multi-stakeholder group, that we can see in the Asia-Pacific region. These were some of the examples I wanted to share. It’s also important to understand that in an increasingly interconnected society, lack of access to the internet can tremendously impact day-to-day activities, and in the rush to make everything digital, we might in the long run leave some actors cut off from the fruits that technology can provide. For example, I was just reading one report yesterday that some government banks withdrew the physical provision of services to push for web-based services, justifying the closure of offices in small communities. I’m sure some of us have seen in our own countries too that many banks are closing their physical offices just to push for web-based services. Such decisions also affect the daily operations and lives of communities. So there is a need to identify innovative approaches to connect the population in remote and geographically inaccessible areas.
It’s not just about withdrawing physical offices and pushing for web-based services, because sometimes that hinders the overall success of delivering digital services as well. When it comes to the empirical parameters that should be considered here, the first would be the technical: the distance and remoteness of the areas. The other would be adoption challenges: there could be language barriers, or be it a disability. Literacy rates also differ among people of diverse age groups: for young people, say 18 to 30, it is easy to get hold of these web-based informative technologies, but for a person above 60 or 70 years of age, it is hard to get accustomed to services which are not as natural or as exciting for them. So we have to think of ways in which we can include those age groups, and those genders as well. In many areas with no internet or very bleak internet connectivity, community radios used to serve as a medium of communication, so that can be one area where we think about how to use community radio in an innovative manner to bring along other people as well, not only our own age group, but to include them, scale them, and educate them about how they can contribute to the digital forum and become digital ambassadors for their communities. There have also been technologies like low earth orbit satellites, which are tools for cost-effective internet access in remote and inaccessible areas, so such technologies can also help bridge the communication divide. Then, on the MIT Sloan study I mentioned: there was an industry-leading integrator of digital networks, Sterlite Technologies, which collaborated with the MIT Sloan India Lab to help develop a business model for a for-profit initiative with the goal of expanding high-speed internet connectivity to more than 20 villages of India.
And its target is to do this across three lakh villages by 2024. So these are some innovative approaches that have been going on in rural areas. But along with these innovative approaches, what we as youth can do is also a very important question, to bring along a society which is just and equitable. There was one very motivational story that I came across in 2019, which also made me think along the lines of internet governance, and which I would like to share with you all. There was a man called Carlos Pereira who was driven by the passion to empower people by enabling them to have a voice, and he did something innovative through a mobile app. He was a computer scientist, and his 10-year-old daughter Clara could not walk or talk because she was born with cerebral palsy, which is a group of disorders that affect a person’s ability to move and maintain balance. Basically, to give his daughter a voice, he quit his job as a computer scientist and developed an app to help her communicate. That app was called Livox. The app’s algorithms could interpret motor, cognitive, and visual disorders, and it used machine learning to predict and understand what the person would want or need. The Livox app could be used by people living with a range of disabilities, be it cerebral palsy, Down syndrome, multiple sclerosis, or even a stroke. For Clara, the app gave her a voice: when her dad asked her what she wanted for breakfast, the app recognized his voice and gave Clara options on the screen, allowing her to select what she wanted. The app also gives disabled children a more inclusive education; for example, if it is used at a school, the software can hear a teacher’s question and provide appropriate multiple-choice answers for the students to select. And that app was named the best inclusion app in the world by the United Nations.
If you Google more about it, you will see how multiple software companies adopted the idea behind the app and went on to build software that would be more inclusive. So that is one example where an individual, through his own mind, could change the holistic viewpoint of society and make people understand how the gender divide could be addressed and how people with disabilities could be brought to the fore of digital inclusion. Among my panelists as well, I see very erudite computer engineers, and I’m sure across the board, among these participants, there would be people with multiple talents and skills who can think of innovative ideas for inclusion. So that is something I wanted to highlight, and I would be happy to hear the insights from the other panelists as well.

Jaewon Son:
Thank you. Yeah, I agree with many of your points. When I was thinking of what to say, I was thinking of the psychological theory of Maslow’s hierarchy of needs, which I think many people know: first the basic needs like food and water, then psychological needs and self-fulfillment, something like that. I thought, yes, of course, we first need access to our basic needs, but after that we need digital inclusion and the right to participate as a stakeholder, and at the top of the hierarchy maybe we can think of things like the ecological impact of the internet, or sustainability in the long term. In Korea, we have among the strongest internet connections, and many people own a smartphone and so on, so I think we have fulfilled the first basic need. However, the second stage, having stakeholder participation, is another stage that we still have to achieve, because decisions about how to run the internet influence so many factors, and we really need to hear from other people as well. But from my experience, even in such a high-tech country, when it comes to discussions about the internet, there are still very few stakeholders who have a say. For example, when I joined the Korea Internet Governance Committee, for managing the internet governance program and so on after finishing APIGA, I was the only youth there, and besides me there were only two or three people who were female. It was mostly male IT professors who know a lot, and I don’t know if it’s only Korea or Asia more widely, but the youth were considered as someone who doesn’t know much and should learn from the others, so that even though I might have a say, they were not really taking it seriously, saying instead, “I will tell you, you know.”
And so, when the environment is set like this, even though I was trying to encourage other youth who had finished the program together with me, not many people were willing to join, as they knew what the discussion environment would be like, and they thought: when I cannot be heard anyway, what’s the point of going there and having a say? So I think there are also some cultural barriers that make it hard for us to have a say in this participation. But also, some of the youth were having a hard time because they are really stressed about getting a job after they graduate, and when they are not directly majoring in IT, they were afraid: what if I spend so much time on internet governance and am not able to get a job? How will I manage to look for another job if my only activities have been in internet governance? So many of the Koreans who finished this APIGA program, even though some of them got awards and everything, in the end suddenly quit all the internet governance programs, thinking: since I do not have much expertise in coding, maybe I still cannot get a job in this, so I should switch to something else. So I think it would have been really nice if we could have shared, in a soft-skill or indirect way, how being involved in internet governance can actually benefit them, because for me, even though I’m not majoring in IT, many of the skills that I’ve learned through internet governance have really benefited me in research and in many other respects.
For example, by learning about the importance of multi-stakeholders and participatory approaches, I was able to see how I could bring that into my own research, doing more surveys of the public and stakeholder interviews to include them in the decision-making process. So I think it’s not only about computer science, and I wonder why it has to be either A or B. Also, before COVID, we were usually expected to be at these forums in person, and I think many people had a hard time asking their supervisors or professors for leave to attend such an event, when not many Koreans understood what internet governance is even about. So for people to continue their journey, I think it would be really beneficial if more people knew what internet governance actually is. I think we have quite a long way to go for many people to understand it and for us to give them more opportunities to participate regardless of major or background; then we will get to think about the environmental impact or sustainability. Yeah, I will stop now.

Tatiana Houndjo:
Okay, then I want to say something. Thank you, Jaewon and Mohammad, for already putting so many things on the table so far. Talking about the inclusion and participation of young people in the internet governance system, I think we need to think about a way to federate many initiatives together, because when you talk about the inclusion of youth and their engagement in these kinds of topics and discussions, the question is: do they have the access? Do they have the information about this? Are they even using the internet? Jaewon, I’m so happy that you talked about the needs of young people, because they need to be able to get past the stage of saying: I don’t have a job, I don’t know what to eat, so I cannot be engaging in these discussions. We are here for one hour having a discussion, but a young person who doesn’t know what to eat today at lunch or at dinner, I’m sure, won’t be interested in coming to this table to talk about something like that. That’s one point: we need, as the internet governance ecosystem, to partner with initiatives that are making sure young people are able to sustain themselves. Another point is meaningful access. What you realize is that we usually have the same people coming every year to talk about the same things. The question is: how do we make sure that this information, this knowledge, these capacities, these skills are spread beyond the usual network? How are we, as civil society organizations, making sure that people who are not yet part of these discussions, and people who are living with disabilities, feel the impact of the discussion?
How do we make sure people have a seat at the decision-making table and are part of the decision-making processes? At the Internet Society, we had this idea, I think in 2018 or 2019, for a program whose basic goal is to make sure that newcomers in the internet governance ecosystem keep coming into the system, because we had gotten to the point where it was only the same people coming every year: the same twenty people attending, the same twenty organizing. So we ran this program, and it was ongoing until 2022, and we’re happy to see now that many more young people are interested in this discussion and this topic. That’s one thing we need to think about, and it’s also part of bridging the digital divide, because I’m sure that in your countries as well there is a digital divide between people who have access to information and people who do not, between people who have the skills to be online, to interact, to use the internet efficiently, and people who do not have those skills. So the divide is not only about connectivity: even where there is internet access, you can see gaps in access to policies, to information, and to opportunities for people across the country. Unfortunately, some of these efforts did not go far because there was a lack of support.
If you want to do something as part of the education process, we need the support of governments, that’s one thing; we also need the support of private companies and internet service providers. So, talking about the digital inclusion of young people in the internet ecosystem, there is a need to federate many initiatives together. This should be done in a way that builds sustainability into the process: one thing is to launch initiatives, and another thing is to make sure that these initiatives are sustainable and can have an impact in the long term. Another thing I want to talk about is the lack of meaningful data, which we need to focus on and take action on. If a government wants to support an initiative, or if a private company needs to support an organization that is working with young people, the question is: how do we measure what impact it is going to have on the community, on people, and on society in general? So there is a need for meaningful data, and we need the statistics agencies in every country to provide data that we can use in order to make sure that the projects we are working on are going the way we want them to go. At the Internet Society, we launched a project in 2020, which is Internet.bj, and this project has two different parts. The first part is to reconstitute the history of the internet in Benin, and the second part is to have a big platform communicating in real time with the different internet providers in Benin.
In Benin we have MTN, we have Moov, we have Celtiis, and we also have the optical fiber provider, SBIN. The idea behind this project is to have a platform where we can see how the internet is working in different areas of the country: how many people are using it on a daily basis, how many people are using the internet in Cotonou, in Parakou, in Natitingou, and also how they are using it. Are they using mobile phones, or optical fiber, or the BLM? These are the data we are trying to gather as part of this project. And yeah, I think the inclusion of young people, again, is not something we can tackle with just one initiative or one project; it has to be something we think about broadly and sustainably, and we need to partner together, work together, and make sure that, one way or another, we are working toward the same goal. Thank you.

Pavel Farhan:
All right. Hi again, this is Pavel, for the record. I guess the benefit of going last is that Atif, Jaewon, and Tatiana have pretty much checked all the boxes on what I would like to talk about, but it makes my job easier. So I am based in Thailand, but I’m originally from Bangladesh, so I’d like to spotlight a little the barriers we face in Bangladesh: the barriers to digital access and participation for disadvantaged and underrepresented groups, especially the youth, which are multifaceted. These challenges manifest in several ways, but I’ll keep it short due to the time limitations. There are four barriers which I feel are significant to address, the first being limited infrastructure. In many rural areas of Bangladesh, the lack of proper internet infrastructure remains a significant barrier. People in these areas often struggle with slow or unreliable connections, or carrier services are reluctant to set up phone towers there, because they simply do not have the infrastructure to reach these remote areas. The second barrier, which is very important to address in the modern world, is the affordability factor. Even though internet access is now considered a basic human right, its cost can be really prohibitive for Bangladeshis in general, and of course for the Bangladeshi youth, particularly those in low-income and marginalized communities. And even though Bangladesh has introduced many affordable data plans and devices, they have only benefited a percentage of people and not really covered the broader side of accessibility. And thirdly, of course, we have to address digital literacy, because a lack of digital literacy and awareness is a significant challenge. Many individuals, especially in rural areas, lack the skills and knowledge to use the internet effectively.
There are still, you’d be surprised, some people who, forget the internet, have probably never seen a computer before. And to them, it doesn’t affect their lives in any way, but it is for us to go and make them aware that it should be affecting their lives. And finally, the language barrier. While Bengali, or I would say Bangla, is the primary language spoken in Bangladesh, not all of the content available online is in Bangla. And although there has been a lot of push for universal access in the last few years, the language barrier can actually limit access to information and services for those who aren’t proficient in English. And if a youth or a marginalized community is okay with not learning English, what can we do, right? This is where we have to come up with successful strategies to overcome these barriers. The Bangladesh government has been doing something similar: they have had an initiative for the past decade, or two decades actually, called Digital Bangladesh. Similar to what Atif mentioned about Digital India, this government-led program aims to address digital inclusion by providing various services and promoting digital literacy. And there’s one particular project they have called InfoLadies. Basically, it’s an initiative where trained women travel to rural areas on their bicycles, with internet-equipped laptops, to provide information and services to local communities. This initiative was, I think, started back in 2013, so it’s been 10 years now, and it has played a crucial role in improving digital access and literacy, not just for marginalized communities, but for youth who probably would never have been aware of how the internet affects them or how they can contribute to the internet community. And additionally, we also have another project, a mobile financial service called bKash.
It has made it easier for people, even those who don’t have a traditional bank account, to engage in digital transactions, showing them that just because you don’t have a bank account doesn’t mean you cannot still use the internet for financial transactions. So this fosters economic inclusion as well. There have been other efforts to improve digital access in Bangladesh, which are ongoing. And yes, challenges still persist, and we have a long way to go; I think this is probably the third decade that Digital Bangladesh has been working on this. But we have made some significant strides in bridging the digital divide and promoting inclusivity. Still, this is only the beginning, and we have a long way to go. Back to you, Rashad.

Rashad Sanusi:
Thank you so much, Pavel, Tatiana, Jaewon, and Atif, for the great contributions. I really enjoyed hearing you. And now we go to the Q&A session. Also, if some people want to share some ideas or insights, we are happy to hear them; you can also raise your hand. Okay.

Anna:
We have one question from a community member who’s watching us online, John. It’s a question to all panelists. He’s saying, thank you so much for sharing your perspectives, and his question is: how can we streamline cooperation between different sectors to advance the inclusion and participation of young people in internet governance? He notes that some successful initiatives were mentioned by Atif and Pavel about the private sector and government boosting participation and inclusion, and asks how we can advance that and make sure that there is a true multi-stakeholder approach in these solutions.

Pavel Farhan:
I think I can just go ahead and answer this first, and then pass it on to Atif. So, if I have to speak from an academic perspective, educational institutions obviously play a significant role in reducing the digital divide and nurturing internet governance literacy and leadership among youth. The way I see it, we talk so much about digital literacy education, we push so much for digital literacy, but are academic institutions doing enough? Yes, they are instrumental in equipping students with essential digital skills. But if you go and check any university, is there a specific course where they’re teaching internet governance? Do the students actually know what internet governance is? They’re in a world where internet governance affects so much of their daily livelihoods, and they’re on the internet 24/7, but how many of the skills do they have to navigate the online world and understand the implications of their digital actions? Through structured programs, I believe students can gain a more proficient understanding of how they should present themselves online. We are lucky to live in a world where research and data are so advanced that academia contributes to our understanding of the digital divide through research and data. There are people who are creating and conducting surveys and research, which helps us understand and identify the gaps in internet access and usage, which in turn helps inform policymakers and organizations on how they should make and structure their informed decisions. So this is another thing, I believe: the research and data capabilities of academia to inform policy development. And finally, as we keep pushing, we talk about youth engagement. The universities themselves cannot just offer a course, right? There have to be youth who, of their own accord, want to engage and talk about internet governance with their peers.
Academic institutions can provide a platform, and this can actually include hosting forums, clubs, and events related to digital inclusion and governance. And what this does is foster leadership and help students understand how they should be advocating for their rights online. So, yeah, these are some of the points I believe can answer the question that has been asked. And I guess, Atif, if you can go ahead and speak a little bit more.

Mohammad Atif Aleem:
Yeah, I think I agree with Pavel. And from an academia point of view, he has, I mean, already stated what needs to be done to foster more participation in order to include people. So, from a private perspective, I think the question was how can more partnerships be made to cater to inclusion and digital activities. So, I can say that it is very difficult for any private entity to do this on its own. Okay, it can take one initiative. Even a company like Google, which is headquartered in the United States of America, in order to implement its Internet Saathi program, had to consult the government of India. And it had to collaborate with another company, Facebook, to deploy that program to around 3 lakh (300,000) villages. So, it's not an easy task. Especially as young entrepreneurs, you should seek the different modes of partnership that are available to you. And that can come through guidance from academia, as Pavel has stressed. And it can also come by taking active participation in the various internet governance schools that are there. There's a non-profit entity called the Internet Society. There's the Internet Foundation, which every six months calls for people to submit a project proposal. So, that also answers Joshua's question on how to source funding. I think Tatiana can also give her inputs. But there are many calls from entities like NGOs or the Internet Society. There are hackathons, if you are in the IT industry, from many major IT software firms, where you can present your ideas, win some prize money and take your projects forward. So, there have been these initiatives and I think other people can also put… Yeah, we are running out of time. So,

Rashad Sanusi:
thank you, Atif. And thank you, everyone. I think for further discussion, we can stay in touch, and you can send your questions to us as well, so we can see how we can help. So, I want to thank Pavel, Tatiana, Jaewon and Atif and all the people who participated in this session. We learned a lot about how we can tackle digital inclusion; digital inclusion is a complex issue, but we can do more for it. Thank you all for your participation, and I hope to see you at future engagements as well. Thank you and have a good day. Bye. Thank you so much.

Audience:
Bye. Bye. Bye. Bye. Bye.

Anna: speech speed 139 words per minute; speech length 679 words; speech time 292 secs

Audience: speech speed 112 words per minute; speech length 10 words; speech time 5 secs

Jaewon Son: speech speed 160 words per minute; speech length 1174 words; speech time 440 secs

Mohammad Atif Aleem: speech speed 161 words per minute; speech length 2349 words; speech time 875 secs

Pavel Farhan: speech speed 153 words per minute; speech length 1644 words; speech time 643 secs

Rashad Sanusi: speech speed 122 words per minute; speech length 1009 words; speech time 497 secs

Tatiana Houndjo: speech speed 168 words per minute; speech length 1720 words; speech time 613 secs

Conversational AI in low income & resource settings | IGF 2023


Full session report

Dino Cataldo Dell'Accio

This analysis discusses several key points and arguments: AI applications in healthcare, the potential of AI and chatbots in low-resource settings, the concept of trust in AI and digital technologies, and the need to establish frameworks for evaluating the reliability and trustworthiness of AI solutions.

Firstly, the importance of user identification in AI applications in healthcare is emphasised. The use of facial recognition for digital identity is highlighted as an effective solution implemented for the United Nations Pension Fund. This demonstrates how advanced technologies like AI can be utilised to enhance security and streamline processes within healthcare systems.

Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that these technologies have the ability to address resource limitations and reduce inequalities in healthcare access. To support this argument, a blockchain solution designed and implemented for the United Nations Pension Fund is mentioned. The use of blockchain technology can provide secure and transparent data management, enabling efficient delivery of healthcare services in low-resource settings.

The concept of trust is recognised as crucial in AI and digital technologies. It is argued that the public should have confidence in the solutions and entities that offer these technologies. The analysis highlights the importance of not burdening individuals with technological details, but rather fostering trust in the overall solution. Trust is seen as a vital factor in promoting widespread adoption and acceptance of AI and digital technologies.

Furthermore, the need to establish frameworks for evaluating the reliability and trustworthiness of AI solutions is emphasised. The analysis suggests that not all solutions have the same level of reliability, and there is a need to develop criteria for comparing and contrasting different AI solutions. This would enable the identification of trustworthy and reliable solutions that can be implemented effectively. The speaker believes that such frameworks will promote accountability and transparency in the AI industry.
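
The evaluation framework the speaker calls for could take the shape of a weighted scorecard that compares candidate solutions against explicit reliability criteria. The sketch below is purely illustrative: the criteria, weights, candidate names, and scores are all invented for the example, not drawn from the session.

```python
# Illustrative scorecard for comparing candidate AI solutions on trust criteria.
# The criteria, weights (summing to 1.0), and 0-5 scores are invented examples.
WEIGHTS = {"accuracy": 0.4, "data_provenance": 0.3, "transparency": 0.2, "auditability": 0.1}

def score(solution):
    """Weighted average of per-criterion scores (0-5 scale)."""
    return sum(WEIGHTS[c] * solution[c] for c in WEIGHTS)

candidates = {
    "chatbot_a": {"accuracy": 4, "data_provenance": 2, "transparency": 3, "auditability": 1},
    "chatbot_b": {"accuracy": 3, "data_provenance": 5, "transparency": 4, "auditability": 4},
}

# Rank candidates from most to least trustworthy under this rubric.
ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked)
```

Whatever the actual criteria, making them explicit and weighted in this way is what allows different solutions to be compared and contrasted, as the analysis suggests.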

In conclusion, this analysis brings attention to various critical aspects of AI applications in healthcare, the potential of AI and chatbots in low-resource settings, the concept of trust in AI and digital technologies, and the need for frameworks to evaluate the reliability and trustworthiness of AI solutions. It underscores the importance of user identification, the potential of advanced technologies in addressing resource limitations, and the value of trust in fostering widespread adoption. Furthermore, it highlights the necessity of establishing criteria for evaluating and selecting reliable AI solutions, promoting accountability and transparency in the industry.

Olabisi Ogunbase

Digital patient engagement is crucial for maintaining relationships with patients even after they leave the hospital. Platforms like WhatsApp play a vital role in this aspect. WhatsApp is a powerful digital tool that enables ongoing interaction between healthcare providers and patients. It allows doctors, nurses, dieticians, and social workers to provide guidance and answer patient questions. This continuous engagement helps prevent relapses and educates patients about their health conditions.

WhatsApp also serves as a platform for passing on education and notices, and as a support system for patients to share ideas and support each other. However, there are some limitations with the WhatsApp platform, such as delays in response and lack of personalization. Implementing AI in healthcare communication, specifically conversational AI, could address these issues and provide real-time, appropriate responses.

Collaboration and knowledge-sharing are essential for driving innovation in healthcare, particularly as technology continues to advance. By working together, we can improve digital patient engagement and achieve better healthcare outcomes.

Rajendra Pratap Gupta

Conversational AI is emerging as a promising solution to improve accessible healthcare in low-income and low-resource settings. A study showed that Conversational AI scored 81% in the MRCGP, surpassing human physicians who scored 72%. This highlights the potential of AI to enhance healthcare delivery and bridge gaps caused by the lack of qualified doctors and inadequate healthcare infrastructure. AI in healthcare is aligned with SDG 3 (Good Health and Well-being) and SDG 9 (Industry, Innovation, and Infrastructure).

However, there are concerns about awareness and implementation of Conversational AI in low-resource settings. Some digital health professionals are unfamiliar with its concept and potential applications. This lack of awareness might hinder successful implementation.

Rajendra Pratap Gupta supports using voice-based data through Conversational AI to increase the accuracy and volume of health data, leading to improved healthcare outcomes. Collaboration and a user-centric approach are crucial in AI implementation. Involvement of different sectors, including the private sector, is vital for sustainable business models. The WHO, ITU, and WIPO play significant roles in facilitating AI implementation.

Addressing the digital divide is important, as 2.6 billion people globally lack reliable internet access, hindering effective AI implementation. Efforts should be made to increase internet access in underserved areas.

Education in AI and robotics is necessary, with initiatives in place to develop courses for students and train frontline health workers. This will create a skilled workforce to utilize AI technologies effectively.

The debate on regulation in AI continues, with some advocating for guidelines over over-regulation to maintain flexibility and ethical standards while promoting innovation.

In conclusion, Conversational AI shows great potential in improving accessible healthcare in low-income and low-resource settings. It requires awareness, collaboration, and efforts to address the digital divide and provide education in AI and robotics. Finding the right balance between regulation and innovation is crucial. By addressing these issues, AI can play a significant role in advancing healthcare and achieving the Sustainable Development Goals.

Sameer Pujari

In this analysis, the speakers focus on the transformative potential of technology, specifically conversational artificial intelligence (AI), in addressing existing gaps in healthcare services. They assert that these gaps, particularly in low middle-income settings, can be effectively tackled through the implementation of technology. The argument put forward is that technology, especially conversational AI, serves as an enabler in bridging the healthcare divide.

One important observation made by the speakers is the need for a people-focused, collaborative, equitable, and sustainable approach when integrating technology in healthcare. They emphasize the importance of considering the specific needs of individuals and communities, as well as fostering collaboration between various stakeholders. In addition, they stress the importance of ensuring that the benefits of technology are accessible to all, regardless of socioeconomic status.

The World Health Organization (WHO) plays a crucial role in this conversation by providing guidance and support for the effective implementation of AI in healthcare. The speakers highlight WHO’s efforts in maximizing the value of AI in healthcare through initiatives such as the global collaboration with the International Telecommunication Union (ITU) and the World Intellectual Property Organization. These efforts aim to harness the potential of AI to improve global health outcomes.

Ethics and regulations emerge as important considerations in the implementation of AI in healthcare. The speakers stress the need for ethical approaches to AI development and deployment, ensuring that the technology is used in a responsible and beneficial manner. They also highlight the importance of regulations to provide guardrails and prevent potential misuse of AI. However, it is asserted that regulations should not stifle innovation but instead strike a balance between regulation and technological advancement.

Education and training play a significant role in achieving responsible AI implementation. The WHO offers courses on ethics and governance of AI to promote understanding and ethical approaches among developers, policymakers, and implementers. These courses aim to equip individuals with the necessary knowledge and skills to navigate the complex ethical considerations surrounding AI implementation.

In conclusion, the analysis underscores the potential of conversational AI in addressing healthcare gaps and improving global health outcomes. A people-focused, collaborative, equitable, and sustainable approach is deemed essential in effectively implementing technology in healthcare. The WHO’s guidance and support, along with the development of educational courses, ensure that AI is deployed ethically and responsibly. It is evident that harnessing the potential of AI requires a well-balanced approach that brings together technology, ethics, regulations, and education for the betterment of healthcare systems worldwide.

Mevish Vaishnav

Conversational AI has the potential to revolutionize the healthcare industry by analysing health conversations and generating valuable insights and decisions. This presents an incredible opportunity to gather and analyze health data from billions of people and clinicians, leading to more effective healthcare outcomes. Supporters argue that Conversational AI can be the starting point for generating health AI. By leveraging the power of Conversational AI, healthcare professionals can better understand patient needs and tailor treatment plans accordingly.

Conversational AI also addresses the lack of access to basic health information, particularly in rural areas. Many people living in remote or underserved locations struggle to access crucial information about their health. Conversational AI can bridge this gap by providing easy-to-understand and readily accessible information. Advocates argue that generative AI could eliminate the need for doctors to address basic health problems.

The potential of implementing Conversational AI and generative health AI is widely recognised.

Conversational AI is also seen as a powerful tool in patient engagement and health-related education. The effort required in typing and texting often hinders effective communication between healthcare providers and patients. However, Conversational AI streamlines this process by allowing patients to converse naturally, making them feel heard and fostering a better doctor-patient relationship.

Advocates propose the creation of a global generative health AI group under the stewardship of Dr. Gupta. This group would bring together stakeholders, regulators, policymakers, doctors, hospitals, and frontline health workers to set a direction for all involved. This initiative is supported by the belief that the United Nations, as the largest multi-stakeholder and multilateral body, is in a prime position to facilitate this collaboration. This would promote partnerships and support SDG3 (Good Health and Well-being) and SDG17 (Partnerships for the Goals).

The Academy of Digital Health Sciences is working on a report about generative health intelligence, which aims to explore its role in shaping the future of healthcare and is expected to contribute to advancements in the field.

Training and deployment of generative AI in healthcare are emphasized as crucial. Understanding how generative AI works and developing the necessary skills are essential for utilizing this technology effectively and realizing its potential benefits for healthcare outcomes.

In conclusion, Conversational AI has the potential to transform healthcare by analyzing health conversations, delivering information in remote areas, enhancing patient engagement, and facilitating health-related education. The establishment of a global generative health AI group, the training and deployment of generative AI, and the ongoing work by the Academy of Digital Health Sciences highlight the need to fully harness the potential of this technology.

Shawnna Hoffman

During the discussion, the potential of conversational AI to bridge the healthcare gap was highlighted as a significant advantage. The ability of AI to provide 24/7 assistance and access to healthcare globally, through mobile phones, was emphasized. This can greatly benefit individuals in remote areas or those who may have limited access to healthcare services. The convenience and availability of AI-based healthcare assistance can help address health disparities and provide support to individuals in need.

The combination of AI with blockchain technology was also discussed as an efficient solution during crisis situations. It was mentioned that during the COVID-19 pandemic, an AI chatbot combined with blockchain technology helped locate over 10 billion pieces of personal protective equipment (PPE) within the first 24 hours. This demonstrates the potential of AI and blockchain to rapidly respond to critical needs and find effective solutions in times of crisis.

The importance of fact-checking AI and ensuring its accuracy was emphasized. Even though AI is probabilistic and not always correct, it is crucial to verify the information provided by AI systems. One of the speakers, the president of Guardrail Technologies, highlighted the need to put guardrails around AI and fact-check generative AI to ensure its reliability and accuracy. This point stresses the importance of being cautious and critical when relying on AI-generated information.
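
The guardrail idea described here can be illustrated with a minimal sketch: a generated answer is released only if the factual claims attached to it match a vetted knowledge base. Everything in this example is a stand-in; the facts, claim keys, and matching rule are invented, and a production system would instead use retrieval against curated medical sources with escalation to human reviewers.

```python
# Minimal guardrail sketch: release a generated answer only if every factual
# claim extracted from it matches a vetted knowledge base. The knowledge base,
# the claims, and the exact-match rule are simplified stand-ins for this example.
VERIFIED_FACTS = {
    "paracetamol max adult dose": "4 g per day",
    "measles vaccine schedule": "2 doses",
}

def unverified_claims(claims):
    """Return the claim keys whose values do not match the vetted knowledge base."""
    return [k for k, v in claims.items() if VERIFIED_FACTS.get(k) != v]

def guarded_release(answer_text, claims):
    """Pass the answer through only if all claims check out; otherwise withhold it."""
    failed = unverified_claims(claims)
    if failed:
        return "Withheld for human review; unverified claims: " + ", ".join(failed)
    return answer_text

print(guarded_release("Adults should not exceed 4 g of paracetamol per day.",
                      {"paracetamol max adult dose": "4 g per day"}))
print(guarded_release("Adults can take 8 g of paracetamol per day.",
                      {"paracetamol max adult dose": "8 g per day"}))
```

The point of the sketch is structural: because generative AI is probabilistic, the deterministic check sits between the model and the user, which is what "putting guardrails around AI" amounts to in practice.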

The discussion also raised awareness about the issue of internet access and connectivity for AI solutions to be effective. It was mentioned that 2.6 billion people globally lack internet access, which significantly hinders the overall success and reach of AI solutions like chatbots. Ensuring internet access for all individuals, especially those who currently lack it, is necessary to fully harness the benefits of AI and provide equitable access to its solutions.

A holistic approach that considers individual needs, even in remote locations, was emphasized. The experience from an IBM Watson project was shared, where access points were set up in various villages, allowing people to reach these points in half a day and gain access to medical information. This approach recognizes the importance of tailoring AI solutions to meet the specific needs of individuals regardless of their location or resources.

Lastly, the speakers acknowledged the complexity of implementing AI solutions on a wide scale. It was acknowledged that the challenge extends beyond just conversational AI and that the complexity of the problem makes it difficult to implement AI solutions effectively. This realistic perspective highlights the need for careful planning, research, and collaboration to overcome these implementation challenges.

In conclusion, the potential benefits of conversational AI in bridging the healthcare gap, providing 24/7 assistance, and access to healthcare globally through mobile phones were discussed. The combination of AI with blockchain technology was seen as an efficient solution during crisis situations. The importance of fact-checking AI and ensuring its accuracy, considering internet access and connectivity, adopting a holistic approach, and addressing the challenges of implementing AI solutions were all key points discussed during the session. Overall, the speakers expressed optimism about the potential of AI while also acknowledging the complexities and challenges that need to be addressed for its successful integration.

Sabin Dima

Artificial intelligence (AI) is widely recognised as a powerful tool that can replace certain skills, while still acknowledging the importance of human involvement. It is acknowledged that AI can outperform humans in certain tasks, offering greater efficiency and accuracy. Notably, humans.ai, led by the CEO and Founder, has achieved significant milestones in AI development, including creating the first AI counselor for a government and an AI capable of real-time conversations with 19 million Romanians. These accomplishments demonstrate the transformative potential of AI across various domains.

Data traceability and ethics are emphasised as critical considerations in AI development. The CEO’s firm has developed the first blockchain of artificial intelligence to ensure transparency and accountability in AI systems. Additionally, they have contributed to research papers on the ethical implications of AI, emphasising the need to address these concerns.

In the context of healthcare, the CEO argues for a bidirectional approach to AI, aiming to understand people’s problems and provide effective solutions. Emphasising human-like interaction, the CEO advocates for grasping individuals’ problems and urgency. They envision an open innovation platform that fosters collaboration and comprehensive problem-solving.

While technology itself is not the issue, optimising its usage is crucial. The CEO suggests that resources for experimenting with AI projects are readily available to everyone. The focus should be on tackling real-world challenges and driving innovation across sectors.

Furthermore, the CEO asserts that trust can be bolstered in healthcare through the implementation of AI solutions. For instance, the CEO references a project where they cloned a doctor’s voice to send audio messages to patients, enhancing patient care and building trust.

To better understand and regulate AI, the CEO proposes real-world experimentation. By implementing AI solutions in specific regions, regulators can gain insights and make informed decisions on regulations and policies.

The urgency for action and application of AI is evident throughout the discussion. The CEO highlights the readiness of technology and the availability of skilled professionals passionate about AI. Encouraging seizing the opportunities presented by AI rather than merely contemplating its potential is emphasised.

In the conversational AI domain, the CEO suggests making the technology more accessible to underserved populations in low-income areas. By developing efficient models that can run on mobile phones, conversational AI can bridge gaps in healthcare access.

Finally, AI is portrayed as a beneficial tool for employment, increasing productivity and reducing human error. The CEO suggests that AI can supervise performance and mitigate errors, potentially enabling employees to work fewer days while achieving greater results.

In conclusion, AI is a powerful tool capable of replacing certain skills but not humans. The CEO and their firm exemplify the transformative potential of AI across various domains. Ethical considerations, data traceability, bidirectional approaches in healthcare, effective technology utilization, trust-building, real-world experimentation, accessibility, and increased productivity are crucial aspects guiding the application and development of AI. The overall sentiment strongly favours embracing AI to drive positive change in multiple sectors.

Ashish Atreja

Generative AI and AI technologies have the potential to revolutionise the provision of medical care by overcoming the limitations of time and location, extending healthcare access to a larger number of people, irrespective of their physical location. The use of generative probabilistic models in combination with rule-based care plays a crucial role in bridging the gap between scientific treatments and patients’ understanding.

Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only among patients but also among countries, states, and healthcare organisations. Through collaborative efforts and leveraging technology, healthcare can be democratised, ensuring equal access to quality care for everyone.

AI technologies can bridge the digital divide in healthcare. Existing care solutions have the potential to become global solutions if properly validated. Humans play a vital role as transformation agents in bridging this gap, working collectively across silos to ensure inclusivity in healthcare.

Prominent figure Ashish Atreja advocates for a global thought leadership group on generative AI in healthcare. He believes in the power of collective work and engaging with global partners to drive advancements in healthcare systems. Collaborating and sharing knowledge can contribute to the development and implementation of generative AI solutions worldwide.

Conversational AI has the potential to dispel healthcare fallacies by providing accurate and reliable information. However, it is crucial that the technology behind conversational AI is based on validated and trustworthy sources. The FDA has a tiered system for validating health-related technologies based on their potential risk, ensuring their reliability and safety.

To ensure the accuracy and effectiveness of conversational AI in healthcare, an automated or semi-automated governance framework is needed. Currently, there is no specific framework to regulate the validation of conversational AI in healthcare. Establishing such a framework would help maintain the accuracy and credibility of conversational AI, benefiting patients and healthcare providers.

In conclusion, generative AI and AI technologies have the potential to revolutionise healthcare provision, extending care to more people while overcoming limitations of time and location. Collaboration, inclusivity, and validation of technologies are crucial in addressing healthcare inequity and bridging the digital divide. Through collective work, the creation of a global thought leadership group, and the implementation of an effective governance framework, the potential of AI in healthcare can be fully realised, improving outcomes for patients worldwide.

Session transcript

Rajendra Pratap Gupta:
Hi, greetings from Kyoto, and good morning, good evening, and good afternoon, and for some late night. As I start this very important panel discussion on conversational AI in low resource settings and low income settings, let me first give you a perspective on this and how we build up this session. So while we were conceptualizing this very important topic of conversational AI, I did reach out to a lot of my friends who have long time been in digital health, and I must put this through this forum that a few of them weren’t aware of this topic, and which was a big surprise for me. So I think it makes this session all the more important and relevant, because conversational AI is basic digital health. I mean, this is something that we need for the fact that AI is all pervasive, is getting into every aspect of health care delivery, and more than that, what I call is the 80-80-80 rule. 80% of the people don’t have access to health care or qualified doctors. 80% of the areas that we have do not have anything that they can call as health care, and 80% of the problems people have are treatable by probably OTC medications or non-specialist doctors, and that’s where I think our role comes into very importantly. And if you have to talk about affordable and accessible health care, conversational AI is important. While I was serving the union health minister as advisor, I think my boss was very clear, let’s not force doctors to go to rural areas, because they have studied in urban cities for better life, better conditions, and rural areas don’t provide that infrastructure. So even if they go there, what they will do? I mean, coming from that reality, from a country which is of a large population of 1.4 billion, and knowing the effect of what most of these LMICs pass through, I must also relate one experience I had with one of the country heads of IGF who came to our booth and just asked me a question. We’ve been hearing a lot about generative AI. 
Will it solve our health care problems? And my immediate, instant response was generative AI is based on data. We do not have that. What will it analyze? So if you're having a very high expectation of saying that generative AI will immediately solve problems, I'm sorry to say that there has to be a baseline of data, there has to be a baseline of clean data for generative AI to work on. So while there is hope and hype, there is a long journey ahead for all of us. With that having been said, I also must give you a very interesting example of conversational AI, which is actually chatbots, AI-based chatbots. So in my book, I do mention this example that there is a very highly respected exam that doctors aspire to pass. It's called MRCGP in the UK, Membership of the Royal College of General Practitioners. The conversational AI chatbots, they scored 81% compared to human physicians who scored 72%. So I think the evidence is around that there is a future of conversational AI. In fact, there is a present if we deploy it very well, what conversational AI can do. But what we need to create is awareness because so-called LinkedIn leaders of digital health didn't actually know about it when I reached out to them. They're all friends. But today, those whom you have on the screen are the actual leaders who understand. Those who are sitting next to me, Dino and Shawnna. So what we are going to do today is to ask people their experience, their expertise and their expectations from conversational AI. And with me, I have Dino Cataldo Dell'Accio, who serves as the Chief Information Officer of the UN Joint Staff Pension Fund and leads the UN Digital Transformation Group. Besides, he has many accolades, but I will just point out one: the UN Secretary-General's Award for his work in applying blockchain technology for the digital certification of entitlement process of the UNJSPF beneficiaries and retirees. Mr.
Sameer Pujari, who, among the many hats he wears, leads AI at WHO and is the Chair of the AI for Health WHO-ITU focus group. But besides that, he has done a number of things, including Be Healthy, Be Mobile, which is the first mobile app from WHO for chronic diseases. We have Sabin Dima. I think, of all of us in the world of AI and blockchain, I would personally have the highest hope from him, given the fact that he's the first person in the world to merge AI and blockchain together, the founder and CEO of humans.ai. And he's an entrepreneur who started his first social media at the age of 16. And if you hear him, I bet you that it will change your perspective on what AI and blockchain can do. We have Ashish Atreja. He's a doctor. He's a gastroenterologist. He's currently the CIO and the Chief Digital Health Officer of UC Davis Health, and he has done pioneering work in digital health. The reason I got him here is that he did phenomenal work during COVID with chatbots, the conversational AI. We have Mevish Vaishnav, the Group CEO of Digital Health Associates, who sits on various government committees for digital health. She has been a part of the UN initiative, the Innovation Working Group Asia, where she drafted the roadmap for telemedicine way back in 2013. We have Dr. Olabisi Ogunbase, who is a pediatric doctor and quality improvement team lead and mentor. And I've seen her work at the World Health Organization; she has done phenomenal and fantastic work. So what I'm going to do is pass this to my expert panel for their opening remarks on what I have to say about conversational AI for low-income and low-resource settings. So I'm going to start with Mr. Dino Cataldo. Dino, over to you for your views on this topic.

Dino Cataldo Dell'Accio:
Thank you very much, Dr. Gupta, for inviting me to participate in this very relevant, very important discussion and topic. As you kindly introduced me, my background is in the practical implementation of technologies such as blockchain and biometrics, specifically facial recognition for digital identity, having designed and implemented a solution for the United Nations Pension Fund to support the proof of existence of 84,000 retirees and beneficiaries of the UN residing in 192 countries. So my initial thought in addressing the challenges that conversational AI and chatbots can present in settings with low resources, and of course I admit my bias here, is first and foremost the identification of the user. As we look at potential use cases, we cannot avoid appreciating the importance, especially in the healthcare sector, when and if there is a relationship between a patient and a system, using this term in a broad sense, that is intended to provide services and supported information, of the system having the capability to identify who the end user is. Because, as we can imagine, the response will need to be tailored and aligned with the specific needs and expectations of the end user. So here comes the concept of multiple technologies that, working together, can create a system and a solution that is ultimately able to address the needs of the end user. The proposition here, as we discussed in other panels, is that we have AI, which is a probabilistic technology, jointly working and functioning with blockchain, which is a deterministic technology. The two of them, in conjunction, can complement each other and provide that level of support to offer and confirm certainty about identity, and certainty and reliability about the data that the machine learning models ultimately use to elaborate the responses in the conversation. So I think we can start framing the discussion, at least from my point of view, by looking at how the joint functioning of these technologies can ultimately create value and a secure solution for the end user. Thank you.

Rajendra Pratap Gupta:
Thank you, Dino. And I think this is Dino's call to everyone: those of us who believe in leveraging AI for health, or for that matter any critical sector, please ensure that the probabilistic technologies have a denominator of deterministic technology. AI in isolation is probably going to create more distrust unless you start merging it with blockchain. This is where this panel clearly has top global experts who have done groundbreaking work in getting both these technologies to work together. In health, we always say that whatever you do, the first thing the user looks at it with is distrust. And when you start saying that this technology has a basis for ensuring identity and reliability, that cannot happen without blockchain.
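As a purely illustrative aside, the pairing described here, a deterministic layer confirming identity and data integrity before a probabilistic model responds, can be sketched in a few lines. Everything below is a hypothetical stand-in: the `HashLedger` class simulates a blockchain with a plain set of record hashes, and the "model answer" is a placeholder string, not a real chatbot.

```python
import hashlib

def record_hash(record: str) -> str:
    """Deterministic fingerprint of a patient record."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

class HashLedger:
    """Append-only store of known-good record hashes (a stand-in for a blockchain)."""
    def __init__(self):
        self._hashes = set()

    def anchor(self, record: str) -> None:
        # Anchoring a record makes its exact bytes verifiable later.
        self._hashes.add(record_hash(record))

    def is_verified(self, record: str) -> bool:
        return record_hash(record) in self._hashes

def answer_from_verified_data(ledger: HashLedger, record: str, question: str) -> str:
    # Deterministic gate: only verified data reaches the (probabilistic) model.
    if not ledger.is_verified(record):
        return "Cannot answer: source record failed integrity verification."
    return f"[model answer to {question!r} grounded in verified record]"

ledger = HashLedger()
ledger.anchor("patient:84001 bp=130/85")
print(answer_from_verified_data(ledger, "patient:84001 bp=130/85", "Is my BP OK?"))
print(answer_from_verified_data(ledger, "patient:84001 bp=190/85", "Is my BP OK?"))
# The second call is refused: the tampered record's hash is not on the ledger.
```

The point of the sketch is only the shape of the design: the deterministic check runs first and can refuse, so the probabilistic component never sees unverified input.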
And this brings me to my next panelist, whom I have known for many years; he is today the man on a mission, the man who leads AI for WHO and the WHO-ITU collaboration. Beyond AI, given his work in mobile health and in standards, he has negotiated with maybe 194 countries to get people on board with this emerging technology. Sameer, I want to ask you: what has been your experience? What is your vision? What work are you doing in this area? And what can conversational AI deliver for the LMICs? Over to you.

Sameer Pujari:
Thank you, Rajendra, and thanks for this forum. I think it's a very interesting discussion, especially from two angles. One is conversational AI, because the discussions have drifted very much towards just generative AI, and I think there are two different components that need to be discussed. The second is low- and middle-income settings; that's the key that we'll discuss here. Let me step back and say, before I go into my experience, that in these technology forums we often focus a lot on technology. I would urge everyone to take off that hat and think of people. It's very, very important that any discussions we have are focused on the people who are meant to benefit. And these are not one set of people: these are the future generations we're looking at, the current population, and the care providers. Everyone has a role and a stake in this area of work with AI and conversational technology. So technology has to be understood as an enabler. That's the first point. Now, the second point: is it really making an impact? What is the challenge, and how are we looking at it? Rajendra, you mentioned at the beginning of the session that there are gaps in health care services. Even today, in 2023, we are still seeing a massive gap in rural Africa, where women cannot be screened; it's very expensive to screen for cervical cancer. We are seeing massive gaps in Egypt in screening the diabetes population. And these are problems that exist not only because of a gap in access to health care, but because of a proportionate gap in health care providers. So technology provides that specific enabling factor, especially conversational AI. We are not even at the stage of population health components and sexual and reproductive health areas; we are still missing those massive outreach components alongside the health care gaps.
Forget health education and those things; they are further ahead. I think that's where the role of conversational AI is critical. It has been shown through science that in low- and middle-income settings, with the technology getting cheaper, there is potential for these technologies to make a difference across different disease areas in a very effective manner. Our Director-General mentioned that very specifically at the July launch of the Global Initiative on AI for Health. However, we have to be very, very careful about four areas. First is equity, and I think that's where the main component comes into play: when you're trying to deploy technology in low- and middle-income settings, the business value is much less, and hence it is us, this forum, the civil society groups and the international development groups, who need to be cognizant that we are working to ensure equity. The technology companies are not going to push for that. I think that's one part everyone has to focus on as we look forward. The second part is collaboration. It's extremely important that we work together across sectors: health, education, and the different domains of work. Third is a focus on sustainable business models. It's very exciting to launch a new product or a new project and go to the field, and 90% of the time I've seen it die because it doesn't have a sustainable business model. So that's a very important component. The fourth point is looking at how it benefits the user at the end of the table. That's the most important: how can you take this AI to the people? That's the discussion that has to happen all throughout. If we can focus on this people-centered approach, we can make an optimal impact with conversational AI in changing the healthcare domain across the SDG indicators, not just the healthcare indicators.
And that's the key, I think, that I would like to see this forum and its members, in their own roles, focus on. As to what you asked me earlier about what we're doing: WHO, together with ITU and the World Intellectual Property Organization, the heads of the three agencies, announced a global initiative on AI to bring all this work together and reduce the verticalization, with WHO providing the guidance, standards and policies, facilitating by bringing all the groups together, and then actually helping countries implement at member-state level through the right governance approaches. So at this stage, I'll summarize by saying that there is a huge potential in the healthcare market that our member states are seeing, and they're asking WHO to work in that direction. However, as a UN body, we need to ensure that our member states do not get trapped by private sector business models, but that everyone, including the private sector, can benefit from the maximum value of AI in healthcare. Thank you. Back to you.

Rajendra Pratap Gupta:
Thanks, Sameer. It is really heartening to hear what most people refrain from saying: that we should not get trapped by private sector business models. At the same time, we have some phenomenal people in the private sector, and, to your point about people benefiting, we have next Mr. Sabin Dima, the founder and CEO of humans.ai. I really like his approach and the way he's building the AI-verse: he says you will be able to do anything you can think of with AI. It is a totally disruptive approach that Sabin has. Sabin, it's nice to have you; I know you are traveling. I would like to hear your views, and I would like you to speak for a minute about the work you have done in this field, how you are disrupting it, and what you see as the role of conversational AI in healthcare. Over to you.

Sabin Dima:
Hello, everyone. Thank you so much for this opportunity. I agree with Mr. Sameer that we need to have a human-driven approach. I'm Sabin Dima, CEO and founder of humans.ai. We have been in the AI field for more than four years. In this new AI era, we are a company that already has some world premieres. We created the first AI counselor to a government. We created an AI that is able to have a conversation in real time with 19 million Romanians, and we use all of those opinions to train an AI. On the other side, the decision makers can have a conversation with one entity, with one AI, as if they were talking with 19 million Romanians. We strongly believe that artificial intelligence is the greatest tool ever created, and in order to democratize it, we created an AI framework that makes it easy to create a narrow AI. I don't think that AI is able to replace humans, but it is able to replace some skills, and we can help with that. But we take two major aspects into consideration. One is the data; for that, we created the first blockchain of artificial intelligence, in order to have data traceability, to create what is called explainable AI, and to make sure that if I give an opinion to this governmental AI, the AI will be trained with my opinion as well, and there is no bad actor that can delete that opinion. The other one is ethics. We have produced a lot of research papers on ethics in AI; the latest was presented at Imperial College London. Regarding AI in healthcare, AI is certainly going to democratize access to health, but I see a bidirectional approach. Usually, we use conversational AI to get answers: we interact with the AI, we ask the AI questions. But everybody wants to solve people's problems, and I think we are not aware of those problems.
So we should engage in a conversation with people, in a human-like interaction like the one we're having right now, to understand what people's problems are and, probably most important, what the sense of urgency is. And for that, we're not asking governments to invest in infrastructure; we're not asking governments to invest in hospitals and so on. We need just an internet connection. In some cases, there are AI models so efficient that they can be encapsulated in an entry-level tablet, and we can ship it to remote places. So I think that we should build this bidirectional conversation: on one side, asking people so as to become aware of what their problems are, and on the other side, encapsulating different doctor skills, being able to respond, and creating this open innovation platform that is a living organism, in which any startup can participate and bring different skills under the same core.

Rajendra Pratap Gupta:
Thank you, Sabin. Coming to Mevish. Mevish, you have been in this field, writing policies and the roadmap for telemedicine, and now this new field of conversational AI. Given the fact that you are involved in academia, which is expected to show the roadmap for the future to those in this field, what's your view of conversational AI?

Mevish Vaishnav:
Thank you, Dr. Gupta. Hello, everyone. I'm Mevish Vaishnav, the Group Chief Operating Officer at Digital Health Associates and the Academy of Digital Health Sciences. I thank the DC Digital Health for this extremely important session. While we talk about generative AI and large language models (LLMs), I would say that the basic and disruptive entry point for LLMs and for generative AI would be conversational AI. Just imagine a scene where billions of people are talking: patients and populations speaking about health issues, and clinicians addressing them. It would be a phenomenal opportunity to analyze these conversations and create the DHAI, that is, generative health artificial intelligence, which would be different from artificial intelligence for general purposes, because health is very technical; it is clinical. So I see a great opportunity for conversational AI to be the starting point for generative health AI, which will, over time, largely eliminate the need for doctors for basic health problems, because most people in rural, semi-urban or even urban settings need basic information, and this can be handled by conversational AI driven by generative health AI; both are dependent on each other. Without this data, we actually cannot do anything. So I see a phenomenal opportunity, and I think we should build upon this. Thank you.

Rajendra Pratap Gupta:
Thanks, Mevish. Moving on from what you said about generative health AI: when people start interacting over voice rather than texting or writing, which limits the participation of, I'll not call them illiterate, but digitally illiterate populations who have not yet learned to type, and that's a major part of the population. In fact, yesterday at a panel we were noting that 2.6 billion people are still not connected to the internet, and with what Sabin was saying about shipping tablets to low-resource settings, just imagine: if people start talking, the quantum of data that comes out is going to be exponentially more than what we have today, because today you have to type, you have to text, for it to get captured for analysis. The moment you start analyzing voice-based data, it is going to be exponentially more than what we have. So I think the accuracy will increase, and that will become much more worthwhile. At this point, I would like to bring in Ashish Atreja, someone who has actually done a lot of work on this during COVID and even earlier. Ashish, what has been your work in this field, how do you see this field shaping up, and what is the role of conversational AI? Over to you.

Ashish Atreja:
Dr. Gupta, it's a pleasure to be here, and thanks for having me. Greetings from California; it's 1 a.m. here, and I'm really excited about this. We just launched the largest network in the United States on generative AI in healthcare, called Valid AI. A very brief background about me: I did my medical school in India and then came to the U.S. to do public health and then informatics. I practiced as a physician for the first 10 years of my career, and now, as an informaticist and technologist, I support technologies for University of California Health and work globally on many things. I am still an adjunct professor at a medical school in India. I'm considered an "app doctor" because I started building apps around 15 years ago, and these were mostly deterministic models: we took the rules from the guidelines. The biggest gap we see, very eloquently expressed by the previous speakers, is between the efficacy we see from medicine, what is possible today, where 99% of patients' blood pressure can be controlled with current medicine, and the real gap, which Dr. Gupta mentioned, the 80-80-80. Many patients don't even get access to a doctor; they can't even drive to a doctor. The doctors have a waiting list, and even if the doctor prescribes a medicine, they do not have time to explain how to take it and how to do other things like salt reduction. So there is a huge difference between the care patients actually get at home and what is possible. And that is because most human medical care globally, whether in the United States or Africa or India, has locked medicine and care into the same time and the same space as a physician. Everything has become physician-centered: you have to come into the same clinic or hospital to get care. What generative AI and AI can finally do is unlock care from that time and space.
So you can provide care anywhere, and you do not need a physician. You can extend beyond one-to-one physician-centered care to what we call exponential one-to-many care. If I have to tell the same thing about blood pressure control, I can make myself into a conversational AI bot, now, with generative AI, within a matter of weeks, and I can deliver it not only to the people I see at the University of California; I can deliver across California, across the US, and really across the globe. Any solution we make now, if it is validated the right way, can immediately become a global solution. So we are finally at the cusp of unlocking the biggest supply-demand issue in healthcare by democratizing it completely. And if you combine the rule-based stuff, the guidelines that provide rule-based care through text, with the generative, probabilistic side, you are unleashing the science of rule-based care with the conversation that patients need. Rule-based care is our scientific way, the physicians' way, of doing it, but conversation is the way patients receive it. That has always been the barrier to bridge, but now, with the combination of these two technologies, which we call a hybrid AI, you can combine traditional physician-centered care with the patient-centered care everyone needs today, globally. So I'm really excited about this. We have all the US states now looking at it, and we really need to go with a problem-centered approach first, really looking at equity. The inequity is not only among patients; it is among countries, states, and healthcare organizations. If we do it the right way, through collaboration, which is really what I'm looking for here, we can finally make this the most inclusive, the most democratic way of providing care globally, and go from digital divide to digital bridge. And I think that onus is on us, not on technology.
We humans are the transformation agents to bridge the gap, and it is really a big calling for us to leverage technology but put our own DNA and purpose into bridging the gap.

Rajendra Pratap Gupta:
Thanks, Ashish. And it is very important that you launched Valid AI, I think at HLTH in Vegas, which I guess is going on in parallel as we are here. I move to the next expert panelist, Dr. Olabisi. She is a pediatric doctor, and she has done phenomenal work using WhatsApp and other tools in underserved populations. Coming as a clinician, like you, Ashish, she has done phenomenal work. So, Dr. Olabisi, we would like to hear about your work and your suggestions. I think you have a presentation, so I'll ask the technical team here to allow you to share your slides briefly. Hello. Hello. We can hear you, Dr. Olabisi. Please go ahead.

Olabisi Ogunbase:
I want to thank you very much, Professor Gupta, for inviting me to this forum. So I'm a pediatrician.
Okay, greetings to everybody. I work in a general hospital, a maternal and child center. Mothers bring their children to the hospital and then they go, so we have no contact with them thereafter. So we thought about how to continue and ensure patient engagement: how do we maintain a form of interaction when our patients leave us, so that we can prevent relapses and so on? That was what brought us to thinking about what to do when patients leave us, and that's how we came to digital technology and to involving patients in their own care. We thought of using WhatsApp as the means by which we communicate with our patients. Please give me a minute. Okay. So we thought of using WhatsApp to communicate with our patients and to maintain that relationship with them even when they leave us. I'll take my presentation in this outline: a brief introduction, definitions, objectives, what we actually do, and the advantages. WhatsApp here is a form of digital technology where we use tools to maintain that relationship and engagement with our patients. For us, mobile phones are what we have, and mobile phones are what the patients also have, so that's the digital technology tool we're using. Patient engagement is how we involve patients in their own care, and digital means we use electronic means to ensure that. When we started, our objectives were: how can we pass information to our patients? How can we pass notices of what's going on in the hospital? How can we educate them beyond the little time we have? You can imagine that in developing countries there are so many patients, so you don't have much time to engage with them when they come. So what means can we use to pass education to the patients?
The forum also serves as a support system, because the mothers engage among themselves on that WhatsApp platform. They support one another, they ask questions, they share ideas. At those times we just stay like a fly on the wall; we don't say anything. But when they ask us questions, we come in and answer. So there are many advantages to this form of digital engagement using WhatsApp. For us, of course, it optimizes efficiency and avoids unnecessary visits to the hospital: we can answer some questions, so they don't need to come to the hospital. It improves quality of life, patient safety and health outcomes, because we are still engaging with them; we still have that contact and relationship with them. As of a few days ago, as you can see here, there are about 395 participants, and this is just one of the WhatsApp platforms; every clinic has a dedicated WhatsApp platform. In the picture, we talk about weight gain and weight increasing. The mothers send us pictures of different things about their children. Here is one saying the hair on my baby's head has gone off, what is happening? Another asks what is happening to my baby's cord. So they send pictures, they type questions, and sometimes they even send voice notes: doctor, listen to my baby's breathing, I'm not comfortable. They record the baby's breathing and send it to us. So we are able to listen to the breathing, read their text messages, and see the pictures they send. These are various messages from them. This slide shows the information we share: sometimes it's World Breastfeeding Week, World Pediatric Day, or World Hand Hygiene Day, and we use the forum to educate the mothers on the platform.
These are just examples of interactions that I picked up from the WhatsApp platform. They ask about immunizations, or they say, my baby has a cough, doctor, what do I do? And you see that we are able to interact with them: okay, come to the hospital; do this first aid; let me see you tomorrow. We are able to book appointments and see them, and that improves their experience. These are pictures we also send to them: this fontanelle is normal; this is how to position the baby better when breastfeeding. And the picture on the bottom right is a picture of a rash. They send us: doctor, look at what's on my baby's skin. What is that? Do I come to the hospital? What do I use? Of course, we don't really prescribe on the platform, but we can educate, we can inform, we can say, okay, I need to see you in the hospital, come at 10 o'clock, or please come at 9 o'clock. So it's a forum. And these are pictures of their babies that they send on the platform. When their babies are six months old, they say, this is my baby, I've completed exclusive breastfeeding. They're excited, because we've talked about exclusive breastfeeding, so they send their babies' pictures. Like I said before, they support one another: when a baby turns one year, or six months, they send a picture and all the mothers congratulate them. Oh, you've done well; you've breastfed exclusively. As we all know here, talking about digital health, breastfeeding is one of the childhood survival strategies, so it's a big thing for us. So, in conclusion, I've talked about how, at the Maternal and Child Centre in Lagos, we've used the WhatsApp platform as one of the digital tools to engage with our patients even after they have left the hospital. The consultation shouldn't stop in the doctor's office.
Like the last speaker said, it should continue beyond the doctor, so that we can prevent relapses and continue to educate. So the key words, in conclusion, are digital patient engagement, digital technology and mobile health, using the smartphones that the doctors, the dieticians and the nurses have. And this platform is not only doctors; everybody is there: the nurses, the dieticians, the social worker. If a question comes that concerns the nurse, she answers; if it concerns the pediatrician, I answer. Everybody is on that platform, and it's a really useful platform for us. So thank you very much for listening.

Rajendra Pratap Gupta:
Thank you, Dr. Olabisi. I think this convinces us that you can use WhatsApp to bring such a change, with photographs from your mothers showing what the child looks like after six months. One of the things you pointed out is that you don't prescribe over WhatsApp. But with what my friend Dino, who is sitting on my right, is working on with blockchain, and with what Sabin is doing, I think that the moment we are able to put identity within the system, the day is not far off when a prescription on WhatsApp may be legal as well. That's the day we should look forward to. Seeing your presentation and the work you have done, I would say that low-resource settings are the high-opportunity settings for conversational AI. And this brings me to my next panelist, Shawnna Hoffman. Shawnna has held global roles at IBM Watson and, before that, at Dell. She is revered in this field, and she is doing path-breaking work at Guardrail Technologies. Shawnna, over to you on what conversational AI can do and what you would say in terms of ring-fencing the negatives around conversational AI. Over to you.

Shawnna Hoffman:
Thank you so much, Dr. Gupta, for having me here today. And Dinu, I love sharing the stage with you. There are so many great insights that you have. And I’ve been in artificial intelligence for almost 20 years now. And I have seen it at its best, and I’ve also seen it at its worst. And when Watson won Jeopardy back in 2011, I knew that conversational AI had actually taken the front seat. And so I joined IBM at that time. I led a Watson practice for Watson Legal. And when COVID-19 hit, I was chosen as one of the few to lead our COVID-19 solutions to the marketplace. And we had three. And one of the things I had realized after leading an AI practice, that AI wasn’t enough. And we needed to be responsible. And that responsibility was tracked and traced through blockchain. And so that was the combination of both. Our three products that we brought out to the market within the first three weeks of the shutdown were one of really important. Remember when we couldn’t find masks and we couldn’t find gloves? And it was really a challenge to get the PPE across the globe. We had an AI chatbot combined solution with blockchain to track and trace all of the materials. We found over 10 billion within the first 24 hours. And so it was connecting people all over the globe. I will say that AI, one of the most amazing things for healthcare is that those individuals who can’t often travel to a location to get to a hospital or get to a doctor, often they have mobile phones. And so conversational AI is extremely important for us to get around the globe so that individuals have an opportunity to get forms of healthcare. And maybe it’s unusual, it’s not traditional, but it answers those problems, as our previous guest speaker just said. And I love what you’re doing to really bring that, especially to mothers. I’ve got three kids of my own, and man, did I have a lot of questions when they were little. Because every little cough makes you a little scared as to what’s happened with them. 
So other solutions that we’ve worked on, of course, the supply chain. And making sure that not only, oddly enough to say, not that doctors are a supply, but during COVID-19, they were really lacking in so many of the areas. And so we were able to move doctors around through, again, our chat bot. The doctors were able to chat to say, hey, we’re available, we’re happy to go anywhere in the globe, and we could connect them with the hospitals that were the most in need. Again, a blockchain solution with AI. You know, conversational AI has such a potential to bridge the healthcare gap. And I would definitely say there are five that we have worked on throughout the years. And I have to say this before I even mention the five. AI has been around since 1956. And the newest, most excitement that I’ve ever seen is really just this past year. And it is when a system that used to cost my clients over $20 million to put in place, that was Watson, is now a conversational AI that is free to the globe. And so we’re seeing a lot of hype, a lot of excitement around there. But do know there’s a lot of use cases over the past 15 years that IBM Watson has been around that they’ve really solved a lot of these problems. And so there is a good company to go back to to ask those questions. I don’t work for them anymore, but there are a lot of us who have that are willing and very willing to share our experiences. If we were to look at the fives, let me jump into those, accessibility. That was mentioned by our previous speakers. Reaching the remote and underserved populations that lack that access to traditional healthcare. Again, access to mobile phones. Many of them have, although we did talk about earlier, yesterday, the two points, and you did here too, the 2.6 billion people that don’t have access to the internet. We need to fix that to be able to give them an opportunity to be part of this global health system. I love the consistency of AI, so 24-7 availability is my second one. 
It’s extremely important to be able to have doctors available, which we’ve done in the past. With Watson, we did a lot, even remote surgeries; that gets into robotics. Again, AI comprises over 90 different components, and conversational AI is only one of them. You can do remote surgeries from one end of the world to the other, and so we saw some really amazing things. But again, that 24-7 availability with conversational AI is extremely important, and it is consistent. I will say, I’m the president of Guardrail Technologies, and one of the reasons we exist is to put guardrails around AI. AI, as Dino had mentioned, is a probabilistic model. It is not correct 100% of the time; sometimes it’s even really incorrect. We’ve been working in the medical space in AI, and I’ve worked a lot with various hospital systems in the U.S. I spoke at one about six weeks ago, and we dove in with 30 of their top physicians to figure out what we needed to do to answer the problem of the AI being wrong and the AI hallucinating. It can be very scary: it gives the wrong information and could actually cause death. So we need to be careful. We have guardrails; we fact-check the generative AI. That’s part of our program. But make sure that you are fact-checking it, because it is going to be incorrect. The best systems out there, because they’re probabilistic, are never going to be 100% correct because of the way the model works, and there’s nothing wrong with that, but we need to make sure we’re adding that extra layer that confirms we are fact-checking our information. Third, education: it’s a great educational tool, making sure, as you had seen, that mothers know how to breastfeed their babies and what the different rashes look like. Fourth, language and cultural sensitivity is one of my top five, because AI can be customized to the local language and the local responses to things. It can be really cool. There’s some AI out there.
I was just talking to one of our previous guests, and he mentioned a movement program that he was in the midst of finishing, working on a patent application for it. As an individual moves, the AI can watch the movement and see what possible medical issues the individual has. So there is real language and cultural sensitivity there, but then also, from there, being able to say, okay, that’s a cultural thing, but this is unusual or unique; they may have symptoms of other things. Again, very customizable to the individual. And then my last one, efficient triage, with which we can identify urgent medical issues, again 24-7, without having to wait for a doctor’s office to open. So thank you.
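Shawnna’s point that probabilistic output must be fact-checked before it reaches a patient can be sketched in a few lines. The toy Python example below is purely illustrative: the trusted corpus, the sample claims, and the `fact_check` helper are hypothetical stand-ins, not Guardrail Technologies’ actual product. The idea is simply that any claim that cannot be matched to vetted content is withheld for human review rather than released.

```python
# A toy guardrail layer: every claim a (hypothetical) conversational model
# produces is checked against a small trusted reference corpus before it
# is released to the user. Unsupported claims go to human review.

TRUSTED_FACTS = {
    "paracetamol": "Adults should not exceed 4 g of paracetamol in 24 hours.",
    "antibiotics": "A course of antibiotics should be completed even if symptoms improve.",
}

def fact_check(draft_claims):
    """Keep only claims whose key topic appears in the trusted corpus;
    everything else is held for review by a clinician."""
    released, held_for_review = [], []
    for claim in draft_claims:
        terms = set(claim.lower().split())
        if any(key in terms for key in TRUSTED_FACTS):
            released.append(claim)
        else:
            held_for_review.append(claim)
    return released, held_for_review

draft = [
    "Finish your antibiotics even if you feel better.",
    "Garlic cures bacterial infections.",  # a hallucinated claim
]
ok, review = fact_check(draft)
print(ok)      # claims matching trusted topics
print(review)  # flagged for a human clinician
```

A production system would of course use semantic matching against a curated knowledge base rather than keyword overlap; the sketch only shows the gating structure of the extra verification layer.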

Rajendra Pratap Gupta:
Thanks, Shawnna. It is very interesting to first hear from those on the clinical side who have used it at scale; there is no doubt about the effectiveness. In fact, it’s about saving lives. As I said in the beginning, the three A’s: 80% have no access, 80% can’t afford it, and 80% have acute problems, which means they don’t always need to go to a doctor. These are all A’s: access, affordability, and acute. So the fourth A would be artificial intelligence, of course. But given that we are DC Digital Health, that we believe in tangible outcomes for what we discuss here, and that we have taken up the topic of conversational AI, I’ll go back to my expert panelists and ask: given the discussion we have had, if you had a clean slate, what would you recommend, Dino, in terms of our pathway for the next one year in this field? Thank you.

Dino Cataldo Dell’Accio:
Thank you very much for that question and also for that call to action. I think the previous sharing and comments were extremely relevant. I really liked the observation on human centricity, the distinction that was made about the gaps, how to bridge the digital divide, and the concept of guardrails that Shawnna just mentioned. Here again, I like to talk from personal experience. I’ve been working for the United Nations for 22 years, and my background is actually in auditing. For a large part of my career, I was the Chief IT Auditor at the United Nations before becoming the CIO of the UN Pension Fund. So I have a professional deformation toward assurance and evidence. I think that one of the implicit concepts that, if I may, all the speakers of this panel have touched upon, but that we have not yet made explicit, is the concept of trust. In order to have attention to human centricity, in order to bridge the gap, in order to enable human beings to approach, make use of, and be supported by these technologies, I think we need to also build a framework of trust, so that they don’t need to understand the distinction between conversational versus generative AI. They don’t need to understand the distinction between a blockchain and a distributed ledger technology. They don’t need to be bothered with those technological details that are often too complicated to explain and verbalize. They need to just be able to trust the solution and the entities, whether private or public, that are offering the solution. So I believe it’s incumbent upon us working in this field to come together and start building, bottom up and of course also top down, a framework of generally accepted criteria and principles that can be utilized to support the reliability and trustworthiness of these solutions and technologies where and if they are indeed implemented in healthcare or, for that matter, in other areas of our society.
So I think there is a need to start now, recognizing, as we all do, that this powerful technology can be used for good or for bad, and that not all solutions have the same level of reliability. There is a need to start having some sort of criteria that will enable us to compare and contrast, to make assessments, and then to provide a level of assurance. Ultimately, the human-centric approach calls for the user to be given assurance that what they are going to use is trustworthy and reliable.

Rajendra Pratap Gupta:
Thank you for raising this very important point of addressing the core issue of human centricity plus reliability; I think it is a twin opportunity and a twin challenge too. And this brings me to Sameer. Sameer, you lead all the AI initiatives at WHO, and WHO is the multilateral body that every government looks up to. I think there is now excitement across the world for generative AI and AI for health; everyone is waking up. So what is your advice and the roadmap for the next one year, or the action plan, if I were to call it that?

Sameer Pujari:
Thank you, Rajendra. And you rightly said, all the member states are actually getting very excited about this work, both from a positive side and a negative side. When I say negative excitement, it is the scare, the fear of the damage it can do; the positive excitement is the opportunity. We see an unprecedented push from member states. Normally there would be a two-way discussion, but this time member states are actually coming to us and asking, and not just now, but since December last year, when ChatGPT started picking up speed. So WHO, through this process, has put out a position in the June WHO Bulletin, where we have clearly articulated the value and possibilities of generative and conversational AI. In summary, the one sentence that summarizes that article for everyone here is: WHO’s position is to be cautiously optimistic and apply the right safeguards. We have to be cautious, we have to be optimistic, and we have to have the right safeguards. When I say safeguards, I mean the ethics approach. Now, ethics is a very common word, almost a moral word to put out and discuss, but what matters is the application of ethical use, development, and deployment of technology, or AI specifically. More importantly, because AI has more power than before, this is a critical part. WHO has guidance, which it is working with its countries to deploy. So it needs to be not a knee-jerk reaction but a more sustained governance approach for AI, because AI is here to stay. It’s already with us. It will make a difference in the way things go forward in healthcare, in development in general, in education, in agriculture. So what’s important is that we take a detailed, systematic, creative approach in these regards. Also, regulations.
We don’t want regulations to again become a whip for the developers. What we want to make sure is that regulations are there to safeguard and provide guardrails for the right technology and the right products to be deployed across the domain. And it is humbling this time to see that the push is not just coming from the countries; it’s also coming from the developers, the industry, the private sector. Consider the recent discussions at the US Senate with all the CEOs, where there was a call for regulating AI through the governments and the UN. I was in Copenhagen just last week, where there was discussion in the UN High-Level Committee on Programmes on AI regulations and governance, and there is a huge push from the Secretary-General on putting this work together. So I think that’s where the world is going, and we will need to prioritize that, keeping in mind that it has to be people-centered, not technology-centered. The regulations and the ethics should not be technology-centered but people-centered: how is it going to make an impact? And they have to be adaptable for different countries; as you mentioned, 194 countries are at different stages. And for the first time, we’re seeing a rather smaller gap in preparedness between high-income and low- and middle-income country settings. There is probably some parity there, and a similarity across the board. So it’s important to manage that and use the power of collaboration. That’s what we’re leveraging through WHO’s work with ITU, and a lot of the colleagues from ITU are there to leverage what already exists. As WHO, we are giving normative guidance, which is science-based and evidence-based, and deploying science-based, ethical, regulatory approaches.
And my call for this community here, which has a mix of a lot of expertise, including many grassroots workers, is to ensure that the guidance being deployed, or the products being used, are not technology-centered. They have to be science-centered. When I say that, I mean the guidance, the content that goes into it, should not be written by the developers. And this happens: developers pull content out of Google; they care more about the application part than the content. As actual healthcare providers, our job is to make sure that the content is governed through the rightful mechanisms and that the process is right. Technology is the enabler, which is a massive, massive boon for the healthcare process. And if we can get that combination right, in an ethical and regulated fashion, pushing towards the right governance mechanisms, I think we will have a successful one year of AI. And I hope we can come back next year in this forum and say: we just talked about it in 2023, but in 2024, we are making impact. Back to you, Rajendra.

Rajendra Pratap Gupta:
Thank you, Sameer. In this IGF annual forum, the 18th, we had the entire high-level panel constituted by the UN Secretary-General at this meeting discussing AI. One of the things I saw in this forum is that most of the sessions this time are on AI, generative AI, and various guidelines. You made this point about being cautiously optimistic, and also about regulations. But with a technology that is still evolving, how can you regulate? Do you think that AI will regulate itself, or will there just be guidelines that people should follow? And will the work that Ashish, Mevish, and others are doing be a good starting point? Because a guideline gives you a general direction and doesn’t stall innovation, whereas regulation, to a point, can become a hindrance to innovation. So do you think we should stick to guidelines rather than regulation for now, for the next two or three years?

Sameer Pujari:
It’s a great question, Rajendra. And I think there’s no blanket answer to it. Even the European Commission’s EU AI Act is looking at segregating the different ways we regulate products. So it depends a lot on what kind of solution we’re talking about, and on the impact of that solution, to define whether we can work with guidelines or whether regulations are needed. It is very context-specific. Let me give you an example. Tobacco control or diabetes prevention and management is prevention. There is a lot of content available; these are healthcare programs which have provided guidance but lack the outreach. Such simple, guidance-driven programs for health education, personally, I think can be very quickly distributed if there is a small mechanism for testing that the right content is in there. There is a risk, on the other hand, that if you don’t control or regulate this content, it can do damage by providing misinformation, which is a big concern. So there are some products which I think can be loosely regulated and guidelines-driven, but there are some specific areas, like cancer screening products or diabetic retinopathy screening programs, where it needs to be regulated. Now, I get into this dialogue all the time about whether regulation over-controls innovation, and I think that’s the thin line we have to draw. The member states, the countries, want the value. They have seen the problems they have and how technology can help. So the intentions this time are focused more on how we can maximize the value of technology; but at the same time, having that regulation is important, because without regulation there is a massive risk of misappropriation and misuse.
So the level and control of regulation needs to be properly adapted, charted, and defined, but it is important to make sure that we are not turning this into an open platform where anyone can do anything, especially in healthcare. And I ask people this question: would you say the same thing when it comes to your finances? Would you allow non-regulated financial digital models to work across the board? Would you be open to that? Health is two domains further. So people are more worried about their money than their health, unfortunately, and that’s where the answer comes in. I think it is important to regulate rightfully, so we can reap the value of the technology opportunities and at the same time control or safeguard against the damage it could cause in the long run.
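The risk-tiered approach Sameer describes, looser guidance for health-education content and stricter regulation for screening and diagnostic products, can be sketched as a simple routing table. The tiers, product categories, and oversight steps below are illustrative assumptions, not the actual EU AI Act classes or FDA rules:

```python
# A toy illustration of risk-tiered oversight for health-AI products:
# the required oversight depends on the product's risk class, and unknown
# categories default to the strictest tier, erring on the side of caution.

RISK_TIERS = {
    "health_education": "low",   # e.g. a tobacco-control chatbot
    "triage": "moderate",
    "screening": "high",         # e.g. diabetic retinopathy screening
    "diagnosis": "high",
}

OVERSIGHT = {
    "low": ["vetted content review"],
    "moderate": ["vetted content review", "clinical validation study"],
    "high": ["vetted content review", "clinical validation study",
             "regulatory approval with clinical trials"],
}

def required_oversight(product_category):
    """Map a product category to its risk tier, then to its oversight steps."""
    tier = RISK_TIERS.get(product_category, "high")  # default to strictest
    return tier, OVERSIGHT[tier]

print(required_oversight("health_education"))
print(required_oversight("screening"))
```

The point of the sketch is only the structure: the same governance framework can be permissive for low-risk education tools while demanding clinical evidence for high-risk diagnostics, rather than one blanket rule for everything.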

Rajendra Pratap Gupta:
But Sameer, even after the Sarbanes-Oxley Act in the financial markets, we had the subprime crisis. We had banks collapse a few years back, and even this year, the Silicon Valley Bank collapse. So even when you over-regulate, we still get those outcomes; wherever there’s money involved, there will be. I think you made a very interesting point at the very beginning, and I really appreciate your forthrightness, about not getting into the trap of the private sector. But the experience of over-regulation hasn’t served the purpose. That has been the government’s way of putting it: saying we are pro-people, so we need to safeguard them; yet it neither safeguards the people nor the organizations, and at the end of the day, the sector bleeds. But I take your point. Coming to the point you made about people-centricity, and people having trust and ethics around it, I would go to Sabin, who is actually building at world scale. Sabin, what I understand from my experience leading consumer-facing organizations is that trust is a matter of value. If I get value out of humans.ai, I will love it. If I get value and benefit out of the products and services you roll out in AI, generative AI, anything, that will create trust. So what do you think about conversational AI products, or healthcare in general, in terms of using AI to create that value for creating that trust? Because value is the precursor for trust, not the other way around.

Sabin Dima:
I’m 100% sure that the technology is here, so it’s not a problem of technology anymore. Even we, at this round table, have all the resources to start experimenting with a project. I believe a lot in learning by doing. I believe that if we, together as a group, take on one use case, we will help the regulators to better understand, and we can fill this gap between the real world and the regulation of the area. So I would choose the easiest win that we can get, probably in aftercare. For example, we have a project with a big pharma company. We saw that in our region there is a huge dropout rate: people are not finishing their treatments. After three days, they’re feeling better and they stop taking their antibiotics. So what we are doing is cloning the doctors’ voices, because the doctor is the only authority in your life when you’re speaking about medical treatments. We are sending audio messages on WhatsApp in the voice of the doctor saying, hey, Sabin, I know that it’s day three and you’re feeling better, but it’s important to finish the… So what I’m saying is, if we together implement just one solution and choose one region, we will learn a lot, and we will learn from the real world how our initial ideas compare with what the real use-case outputs look like. I’m willing to help with our technology, our team, and our expertise to create together a real-life use case in conversational AI for healthcare. And one year from now, we will know more than we know now.
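The adherence use case Sabin describes, a day-three nudge in the doctor’s voice, reduces to a simple scheduling step once voice cloning and WhatsApp delivery are assumed to be handled by separate services. The following minimal sketch is hypothetical (the patient names, the `DROPOUT_DAY` threshold, and the message wording are all illustrative, not the actual pharma project):

```python
from datetime import date, timedelta

# Toy sketch of the adherence-reminder flow: for each patient on a course
# of treatment longer than the typical drop-out day, schedule one
# personalised message (to be rendered in the doctor's cloned voice by a
# separate, assumed text-to-speech/WhatsApp service).

DROPOUT_DAY = 3  # patients often stop once they feel better

def schedule_reminders(patients, start=date(2023, 10, 1)):
    """Return one (send_date, patient, text) reminder per at-risk patient."""
    reminders = []
    for name, course_days in patients:
        if course_days > DROPOUT_DAY:
            send_on = start + timedelta(days=DROPOUT_DAY)
            text = (f"Hi {name}, I know it's day {DROPOUT_DAY} and you may "
                    f"feel better, but please finish all {course_days} days "
                    f"of your treatment.")
            reminders.append((send_on, name, text))
    return reminders

for when, who, msg in schedule_reminders([("Sabin", 7), ("Ana", 2)]):
    print(when, who, msg)
```

A two-day course generates no reminder, since the patient finishes before the typical drop-out point; the design choice is to message only where non-adherence is likely.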

Rajendra Pratap Gupta:
And Sabin, may I take the liberty of saying your message on your behalf, what you always say: for AI, start now; do something rather than just thinking about it.

Sabin Dima:
Exactly. The technology is here, we have all the skills, and I see a lot of people passionate about the subject. So we need to start doing.

Rajendra Pratap Gupta:
That’s the best message. And I really remember the line you said last time: when you heard yourself speak in Portuguese, you could actually see what phenomenal opportunities exist before us, and the project where you clone doctors’ voices and convince patients to carry on with their treatment. So the fact is that conversational AI has multiple use cases. And we carefully picked this panel, not because of friendship, but because of the complementary things brought to the table by thinkers, doers, and regulators who are critical to the success of conversational AI. That’s why we have blockchain, we have AI, we have WHO, we have UC Davis, we have the Academy of Digital Health Sciences, we have Shawnna. This is the beauty of this panel: we should be able to get into something decisive that we can measure over the next one year. Mevish, coming to you: given that you run a couple of initiatives in digital health, what would your action plan be for the next year?

Mevish Vaishnav:
I believe conversational AI can actually serve as a powerful tool in patient engagement, educating people about the facts behind a particular health-related issue. As you rightly said, Dr. Gupta, imagine the effort that goes into typing and texting; conversing would leave an important and exponential impact. We all know the time a doctor spends with a patient is very limited, but if we have conversational AI, patients would be happy that they have been heard. At the Academy of Digital Health Sciences, we are working on a report on generative health intelligence, which we will be releasing soon, covering the role generative health intelligence will play in shaping the future of healthcare. We will be happy to collaborate with you all. I would also like to say that within the Dynamic Coalition on Digital Health, Dr. Gupta, you should take the stewardship in creating the global group, because the UN is the largest multi-stakeholder and multilateral body and can get everyone under one roof to form a global generative health AI group, where leaders, regulators, and policymakers come together to give direction to all stakeholders, including doctors, hospitals, and frontline health workers, on how generative AI works, how to get trained on it, and how to deploy it. We already have a course on digital health at the Digital Health Academy, and you can visit the website to learn more about it. Thank you.

Rajendra Pratap Gupta:
Thanks, Mevish. Ashish, over to you, after the grand initiative you launched this week. What are the opportunities for stakeholders to work together? Because the worst thing that happens in health is that we all keep doing our work in silos. We rarely connect, hear, and listen, and come together to act. When I took over as chairman of the Dynamic Coalition on Digital Health at the UN IGF, one of the things I have done over this last year is to get all the people on the same wavelength, pick up a project, and deliver it. So every year, for all the Dynamic Coalitions that at least I chair, we come up with tangible outcomes. Given your leadership and your pioneering work, what do you suggest we should be doing in the next one year? We have heard from the previous experts speaking from positions of authority and influence.

Ashish Atreja:
Happy to. I think one of the critical things is that the onus is on us. There is a very famous map called the Gartner hype cycle, which shows that for every technology that comes along there is a hype peak, then a valley of disillusionment, a valley of death, and then a second wave that comes later on. Generative AI is now at the peak of that hype. But we all know there is value. So what I would echo is that the transformation peak, the second, slower peak that happens after the valley of death, is the true peak. That is when we, as humans, don’t just look at what technology can do, but actually start learning how to use the technology for the right use cases within our workflows in a trustworthy, scalable, scientific manner, so it’s repeatable and replicable. That’s the role of science, right? It takes what one person may say and validates that approach across multiple different variations, so you can be fairly confident, for example, that if I give this blood pressure medicine, this is going to be the impact, because it has been a repeated, replicable success. We need to do the same thing with AI; we need to have that lens, similar to what Sameer mentioned, and apply that scientific, evidence-based lens. And then, if something Sabin is doing is great, can we replicate that across countries? And can we demystify that through a playbook? We call it an implementation science playbook. Through Valid AI, 30 health systems and health plans have come together. We have three global partners right now, in Israel, India, and Canada. And our goal, and I love the suggestion Mevish mentioned, is creating this global thought-leadership group on generative AI in healthcare. We would love to contribute our collective knowledge from the US, through Valid AI and the Coalition for Health AI, into it, so we can all learn from each other faster and support each other with best practices.
And the ecosystem should not only be scientific but should also have equal input from our key ecosystem partners, including startups, big technology, and pharma, so we hear from them. If we take a balanced approach, we don’t err on the side of caution necessarily, but on the side of optimism combined with caution, with feedback from all quadrants.

Rajendra Pratap Gupta:
I totally agree with you, Ashish. I think it’s a great approach to make sure that the excitement is also backed by competence, and for that, everyone needs to work together. Sameer, Sabin, Mevish, and Dino have very carefully explained not only the challenges but also the technical solutions that exist. And I think the line that Dino gave sums up the challenge plus the opportunity: probabilistic plus deterministic, as simple as that. And both solutions exist at scale. Dino has deployed for the pensioners in how many countries? 192. 192 countries. We have Sameer Pujari here, 194 countries. We have Ashish Atreja, 30 health systems. Mevish running a course globally. Dr. Olabisi, I’m going to come to her next. You have everyone on the current screen who is an influencer at large, a doer, or both, which is a rare combination. And we know the Gartner hype cycle, but sometimes those historical rules and equations also get challenged. We should challenge the Gartner hype cycle and actually make it a hope-and-heal cycle: there is hope, so let’s use it for healing. As simple as that. So Dr. Olabisi, you have heard all these people. You have used technologies, and I was very impressed to see those six-month pictures of the babies. Given what you have heard, what do you need, sitting there in Lagos, from the people on the screen, to take your work to the next level? What should we be doing? What should you be doing? Over to you.

Olabisi Ogunbase:
Thank you very much for that question. With this WhatsApp platform that we have with the mothers, I can see lots of gaps, because when the mothers send pictures or type their questions, it’s not real time; I might not see it at that point in time. But conversational AI is real time, and through machine learning, responses that are appropriate and relevant come to the patients immediately, unlike with me, where it might be hours before I see the message. So I can see the advantage of what we are doing, but I can also see a lot of gaps. It’s not personalized; it’s open to everybody, the 300-or-so patients on that platform. It’s not personalized, it’s not real time, and sometimes it’s not specific, because when they ask a question about a cough, I use the opportunity to just talk generally, so that everybody picks something up, everybody gains something. So I think that’s the next step for me. We have to move away from this platform, which seems so basic to me, and see how we can introduce AI into it and take it to the next level. Before our session, I was hearing about the Metaverse. We have to collaborate and take what everybody has learned; we don’t have to reinvent the wheel. Technology has come far. We’re talking AI, we’re talking conversational AI. We need to collaborate and take this platform to the next level, because patient outcomes are important, quality of care is important, and patient safety is important, and these are all issues that conversational AI will have an impact on. So this time next year, I don’t want to be talking about WhatsApp; I want to have gone to the next level. Thank you very much.

Rajendra Pratap Gupta:
Thanks, Dr. Olabisi. And I assure you that one of the tangible outcomes we promise from this year’s panel on conversational AI is that next time you will present how we helped you reach the next level. That’s a very big challenge. But if we’re not able to make a difference on the ground, we are just a fancy organization, and we are not that. We actually mean results, and we will deliver. That is the reason we have you here, given the work you’re doing in actual LMICs, as we would call them. If we are able to make a difference to your work as a clinician, we will have succeeded in delivering, in walking our talk. Otherwise, it’s just a mere discussion, which we do not intend. Coming to Shawnna: Shawnna has led a global project, and we are very impressed; a few years back, the only project at scale for AI was IBM Watson. So Shawnna, you have experience and reflections. Given the journey of IBM Watson, and given what we are talking about now, what would be your guidance for this group on tangible outcomes for next year?

Shawnna Hoffman:
You know, my reflections, honestly, are that this is an extremely complex problem in general, and it doesn’t have to do with just conversational AI. So stand back and look at all the different aspects that make the individual vulnerable. One of the concerns that I have is something that you had brought up: the 2.6 billion people who don’t have access to the internet. We need to continue to move forward with conversational AI, but we also need to make sure that those 2.6 billion people get access to the internet and to reliable connectivity to the information. Because if we create all these chatbots and do all this amazing work and they can’t access it, then it’s really not going to do us much good or make the big difference that I know you all want to make. That would probably be my main point beyond what the other speakers have mentioned, because that complexity really takes it to a really tough level. We need to look holistically at the individual and their needs so that they can get access, and at what we can do uniquely. One of the things we did with IBM Watson was to set it up in various villages, where everyone would come to one location. So there are opportunities where individuals don’t even need a cell phone: providing access where it’s walkable to them, within a few miles or even many miles, at least within half a day, so they can get access to this remote medical information.

Rajendra Pratap Gupta:
Thank you, Shana. Now we will move to the questions that I see on the chat. What is the potential for training and learning best practices? So on the training side, at least what I can mention about is that we have courses on digital health at every level for doctors. I mean, there’s a postgraduate certificate course. You can look up digitalacademy.health. We have courses for health professionals, but what we are also coming up, which is very interesting is courses on AI and robotics for class eight students with IIT Delhi we have tied up, is that we need to educate people at the bottom, right from class eight onwards where they start learning about it. And this is the elementary course. And then what also we are launching early next year is the frontline health workers course. If they’re not educated, we’re not going anywhere. So that’s on the training side. On the best practices side, I would put this question to Sameer Pujari, given that WHO is probably one of the best platforms to look at, or even the Dynamic Coalition on Digital Health to look at collaborating with Sameer on the best practices on AI for conversational AI. Sameer, over to you. Can you unmute Sameer Pujari, please?

Sameer Pujari:
Yep. Hi, sorry, there was a lapse in the network connection for some reason. But I just want to mention that on the training part, WHO has converted the guidance it created in the last year into an open WHO course on ethics and governance of AI. And this is not just a theoretical course; it has a very practical, checklist-based approach. I've put the link to the course in the chat; it has been taken by more than 17,000 people across 170 countries virtually. So I think that's one of the solutions. But also in the pipeline, we are coming up with a course specifically for developers, because it's important for this community to understand what it means to take an ethical approach. This course will be going live by the end of the year. We have similar courses coming up on the regulations side as well. These are targeted to developers, to policymakers, and to implementers, and there are checklists and application processes for each of them within the course materials. These are being used by academic institutions across the globe to train students on healthcare provision and AI. So these are some of the ways it is happening. But again, I keep reiterating: let's not reinvent the wheel. Let's join hands; there's content available, and we can deploy it in as many ways as we can through the process. Back to you.

Rajendra Pratap Gupta:
Thanks, Sameer. Ashish, over to you. This is an interesting question: what is the role of conversational AI in dispelling superstitions and health fallacies?

Ashish Atreja:
That’s a great one. I think there is a truism that if you’re not intentional about something, then it’s not going to happen. We do know there are a lot of misperceptions in healthcare; we saw in COVID what happened. And if we just leave it to social media, WhatsApp or others, there’s a lot of chance of things going viral which are not accurate. What we realized in COVID was that clinicians and researchers actually did not have much voice, because the most viral content was the least trusted content from a clinical standpoint. So part of this comes back to the onus being on us to put science as a base, right? Because technology is being democratized, anyone can, within a week, learn to use these technologies to create a bot. That bot may not be validated, and many times it is not if it comes out so early. So we have to put in some framework; one can call it guardrails. For very life-threatening things, we have to put in very rigorous guardrails. The FDA, the Food and Drug Administration in the US, has a three-tier system: if something is life-threatening, it has to go through much more clinical evidence, multiple clinical trials; if it is moderate risk, then a certain level of review; if it is very low risk, then it can go without a major clinical trial. We ought to have some kind of framework like that. If it is educational content, can we even use generative AI to validate some of the content that comes out? If we create a generative AI not on large language models trained on the internet, because then it will hallucinate, but build the large language model on Harrison’s medical textbook, which I got trained on? Can we train it on WHO practices, on VA practices, on open-domain content from the US, the UK, developing countries, WHO, wherever it is, on textbooks?
Then we actually may have an automated or semi-automated way to check its accuracy, put in some delimiters, maybe backed with a human in the loop for critical things. That framework is not here right now, but we need to go beyond, Dr. Gupta, as you mentioned, from traditional ways of regulating to maybe semi-automated, bot-based ways of regulating. I was at a security summit and gave a keynote there, and where it ended was: there are going to be more and more bots trying to hack information now. Right now, humans use bots to get into security and hacking; with generative AI, it’s going to be bots that are doing it. So we need to develop bots that are going to be protecting us from that. Similarly, we may not be able to do this governance by humans alone. We have to go one-to-many, with automated governance backed by humans in the loop, to allow that.

Rajendra Pratap Gupta:
Thanks, Ashish. The point that you raise is very important for the Dynamic Coalition for Digital Health at the UN’s IGF. One of the things I would add to what Mevish proposed is not only generative health AI, but also a generative health AI governance framework. I’m sure there are multiple, but we need to come out with something which is understandable and implementable over the next year or so. An interesting question that I would pose to Sabin is: how can conversational AI technology be made more accessible to people in low-income areas who may have limited access to smartphones or the internet? I know that you did make a passing reference to this, Sabin, but would you like to add something on this?

Sabin Dima:
Yeah, at the very least we need an internet connection if we want to have access to powerful models, but there are very efficient models that you can run not just on a tablet, but even on a mobile phone. I see something like the digital doctor of the village, which encapsulates the knowledge of all the doctors from all around the world; basically, you need one mobile phone for every village. So that is the minimum resource that everybody needs. I like another question: to what extent can conversational AI pose a threat to employment? I’m always saying, and I said it before, I think that AI is not going to take your job, but the human using AI will take your job for sure. Probably, using AI, employees will work only two or three days per week, and we will achieve ten times more results. At the same time, you know that human error is a big problem in healthcare in general, and AI can supervise this. So imagine doing your job with maybe 100 AI assistants helping you perform better. So I don’t see any threat to employment.

Rajendra Pratap Gupta:
And I will add to that: in the other dynamic coalition, on internet and jobs, we had a session yesterday on Project Create. Create stands for Collaborate to Realize Employment and Entrepreneurship for All through the Technology Ecosystem. In fact, we have created job maps for nine sectors, and we have talked about the conventional models of doing business and the Create model. So let’s not fear technology. I think technology is better used for creating jobs than for taking away jobs, and Project Create is about that. So I would say look up the website projectcreate.tech. We are releasing our framework on Project Create tomorrow afternoon at IGF. So the threat is not to jobs; the threat is from a lack of competence. I would put it this way: upskill yourself, be competent. If you’re not competent, anyone can threaten you, not only AI. So I would say please upskill, continuously upskill, cross-skill yourself. That’s important. There is no threat to you if you are updated and upskilled; if you are not, you certainly have one. Sabin, are you trying to say something?

Sabin Dima:
I agree, I agree.

Rajendra Pratap Gupta:
Thank you. Let’s look at the other questions. Where can we access the recording of this conference? IGF broadcasts it on YouTube, so it’s available for people to watch. There is a comment: “I believe youth-mediated initiatives would help bridge the digital literacy gaps.” Yes, of course. Yesterday, we were pleasantly surprised to have a digital health session by the youth tech envoy of the ITU, and she is keen to work with the DC Digital Health to address this big issue of youth involvement in digital health and the DC. There’s another comment, from Ashish: “I am hoping science, which is evidence-based, validated, repeatable, with applicable outcomes and a transparent, ethical approach, can help build trust along with great patient experience.” Yes, Ashish, totally agree with you, and that is what I think this group should be working on: governance and outcomes. By the way, on the other side, with the international standards bodies, we are working on outcome measures using technology. Dr. Enkasing from my team is going to make a presentation at the meeting in Arlington, I guess next month, on how to measure clinical outcomes of technology-driven initiatives. We are especially talking of digital therapeutics, which is being led by Health Parliament at the Bureau of Indian Standards, which represents the ISI, the international standards body. So this was a great session. We are over our time, and I thank each one of you for taking time across different time zones, enriching us on conversational AI, and giving us a pathway for next year. I also thank our technical team at IGF for making this session seamless for us. We will connect back in the mainframe, and hopefully next year we’ll come back with the tangible outcomes we discussed. The goal would be that Dr. Olabisi should benefit from all we talked about.
Thank you so much. Thank you all.

Ashish Atreja

Speech speed

182 words per minute

Speech length

1816 words

Speech time

598 secs

Dino Cataldo Dell’Accio

Speech speed

133 words per minute

Speech length

681 words

Speech time

307 secs

Mevish Vaishnav

Speech speed

166 words per minute

Speech length

541 words

Speech time

195 secs

Olabisi Ogunbase

Speech speed

158 words per minute

Speech length

1580 words

Speech time

598 secs

Rajendra Pratap Gupta

Speech speed

185 words per minute

Speech length

5361 words

Speech time

1737 secs

Sabin Dima

Speech speed

161 words per minute

Speech length

1113 words

Speech time

414 secs

Sameer Pujari

Speech speed

190 words per minute

Speech length

2752 words

Speech time

871 secs

Shawnna Hoffman

Speech speed

194 words per minute

Speech length

1730 words

Speech time

535 secs

Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Moderator – Daria Tsafrir

During the discussions, three main topics were examined in depth. The first topic focused on the concerns of the government regarding the protection and safety of critical infrastructures and supply chains. It was acknowledged that governments have a major role in ensuring the security of crucial infrastructures and supply chains, which are vital for the functioning of industries and economies. However, no specific supporting facts or evidence were provided to substantiate these concerns.

The second topic revolved around the risks of over-regulation and the dynamic nature of AI. Participants expressed the need to strike a balance between regulating AI to prevent potential negative consequences and allowing for its innovative and transformative potential. The dynamic nature of AI poses a challenge in terms of regulation, as it constantly evolves and adapts. Again, no supporting facts were provided to further illustrate these risks, but it was acknowledged as a valid concern.

The third topic that was discussed focused on cybersecurity challenges. It was highlighted that addressing these challenges requires collaboration within international forums and the possibility of establishing binding treaties. The need for such cooperation arises from the global nature of cyber threats and the shared responsibility in mitigating them. However, no supporting evidence or specific examples of cybersecurity challenges were referred to.

Throughout the discussions, all speakers maintained a neutral sentiment, meaning they did not express strong support or opposition to any particular viewpoint. This could indicate that the discussions were conducted in an objective manner, with an emphasis on highlighting different perspectives and concerns rather than taking a definitive stance.

Based on the analysis, it is evident that the discussions centered around key areas of government concerns, the risks associated with over-regulation of AI, and the need for international cooperation in addressing cybersecurity challenges. However, the absence of specific supporting facts or evidence detracts from the overall depth and credibility of the arguments presented.

Moderator 1

During his presentation, Abraham introduced himself and verified that he was audible. He provided a comprehensive overview of his background and experience, emphasising his expertise in the field. Abraham highlighted his various roles within the industry, acquiring a diverse set of skills and knowledge in the process.

Abraham also detailed his educational qualifications, underscoring his pertinent degrees and certifications. He explained how these qualifications have equipped him with a strong theoretical foundation, complemented by practical skills developed through hands-on experience.

In addition, Abraham outlined his past work experiences and accomplishments, showcasing specific successful projects and the positive outcomes they generated. He shared examples of challenges encountered during these projects and how he overcame them, displaying problem-solving abilities and resilience.

Regarding communication skills, Abraham mentioned his experience working with multicultural teams and effectively collaborating with individuals from diverse backgrounds. He emphasized his strong interpersonal skills, enabling him to cultivate robust relationships with clients and stakeholders throughout his professional journey.

Furthermore, Abraham mentioned his commitment to continuous professional development, expressing enthusiasm for keeping abreast of the latest industry trends and advancements. He attends relevant conferences, workshops, and seminars, actively engaging in professional networks to stay connected with industry experts.

In conclusion, Abraham presented himself as a highly experienced and qualified professional, highlighting his expertise through his extensive background, educational qualifications, and successful project achievements. He demonstrated effective communication, collaboration, and adaptability, crucial in a fast-paced, ever-evolving industry.

Gallia Daor

The Organisation for Economic Co-operation and Development (OECD) has played a significant role in guiding the development and deployment of artificial intelligence (AI). In 2019, the OECD became the first intergovernmental organization to adopt principles for trustworthy AI. These principles, which focus on the aspects of robustness, security, and safety, have since been adopted by 46 countries. They also serve as the basis for the G20 AI principles, highlighting their global relevance and influence.

The OECD’s emphasis on robustness, security, and safety in AI is crucial in ensuring the responsible development and use of AI technologies. To address the potential risks associated with AI systems, the OECD proposes a systematic risk management approach that spans the entire lifecycle of AI systems on a continuous basis. By adopting this approach, companies and organizations can effectively identify and mitigate risks at each phase of an AI system’s development and deployment.

To further support the responsible development and deployment of AI, the OECD has also published a framework for the classification of AI systems. This framework aids in establishing clear and consistent guidelines for categorising AI technologies, enabling stakeholders to better understand and evaluate the potential risks and benefits associated with different AI systems.

The OECD recognises that digital security, including cybersecurity and the protection against vulnerabilities, is a significant concern in the era of AI. To address this, the OECD has developed a comprehensive framework for digital security that encompasses various aspects such as risk management, national digital security strategies, market-level actions, and technical aspects, including vulnerability treatment. Moreover, the OECD hosts an annual event called the Global Forum on Digital Security, providing an opportunity for global stakeholders to discuss and address key issues related to digital security.

Interestingly, AI itself serves a dual role in digital security. While AI systems have the potential to become vulnerabilities, particularly through data poisoning and the malicious use of generative AI, they can also be utilised as tools for enhancing digital security. This highlights the need for robust security measures and responsible use of AI technologies to prevent malicious attacks while harnessing the potential benefits AI can provide in bolstering digital security efforts.

In addition to addressing risks and emphasising security, the OECD recognises the importance of international cooperation, regulation, and standardisation in the AI domain. The mapping of different standards, frameworks, and regulations can help stakeholders better understand their commonalities and develop practical guidance for the responsible development and deployment of AI technologies.

Intergovernmental organisations, such as the OECD, play a vital role in convening stakeholders and facilitating conversations on respective issues. By bringing together governments, industry experts, and other relevant actors, intergovernmental organisations enable collaboration and foster partnerships for addressing the challenges and opportunities presented by AI technologies.

Finally, the development of metrics and measurements is crucial for effectively addressing and evaluating the impact of AI technologies. The OECD is actively involved in the development of such metrics, with one notable example being the AI Incidents Monitor. This initiative aims to capture and analyse real-time data and incidents caused by AI systems, allowing for a better understanding of the challenges and risks associated with AI technologies.

In conclusion, the OECD has made significant contributions to the development and governance of AI technologies. Through the establishment of principles for trustworthy AI, the emphasis on risk management, the focus on digital security, the recognition of AI’s dual role in security, and the efforts towards international cooperation and metric development, the OECD is actively working towards ensuring the responsible and beneficial use of AI technologies on a global scale.

Asaf Wiener

The Israel Internet Association, represented by Asaf Wiener, serves as the country code top-level domain (CCTLD) manager for the IL, which stands for the Israel National TLD. As the manager of this important domain, the association plays a crucial role in overseeing internet activities in Israel.

Furthermore, the Israel Internet Association is the Israeli chapter of the Internet Society, demonstrating their commitment to promoting various aspects of the digital landscape. Specifically, they focus on digital inclusion, education, and cybersecurity within the country. These areas are of critical importance in today’s interconnected world, and the association strives to bridge the digital divide, ensure access to quality education, and enhance cybersecurity measures for Israeli citizens.

Dr. Asaf Wiener’s organization also works towards addressing digital gaps and advancing public initiatives. This highlights their dedication to narrowing the disparities in access and opportunities that exist in the digital realm. By engaging in various public initiatives, they aim to create a more equitable digital landscape for all.

Additionally, Dr. Asaf Wiener demonstrates a strong inclination towards public engagement and participation. He actively invites anyone interested in learning more about their activities to approach him for further details, indicating a desire to foster collaboration and partnerships in pursuit of their mission.

In conclusion, the Israel Internet Association, led by Asaf Wiener, fulfills the crucial role of CCTLD manager for the IL, representing the Israeli chapter of the Internet Society. Their focus on digital inclusion, education, and cybersecurity, and their commitment to addressing digital gaps and engaging the public, highlight their dedication to advancing the digital landscape in Israel.

Abraham Zarouk

Abraham Zarouk is the Senior Vice President of Technology at the Israel National Cyber Directorate (INCD). In this role, he oversees the day-to-day operations of the Technology division, focusing on project implementation, IT operations, and support for national defense activities. Zarouk also plays a key role in preparing the INCD for the future by promoting innovation and establishing national labs for research and development.

The INCD places a strong emphasis on addressing weaknesses in artificial intelligence (AI). They examine vulnerabilities in AI algorithms, infrastructure, and data sets, and have established a dedicated national lab to enhance AI resilience. Through collaborations with industry leaders like Google, the INCD is actively promoting the use of AI-powered technologies and driving innovation in the field of cybersecurity.

In addition to their proactive approach, the INCD also acknowledges the potential threats posed by AI-based attackers. As the use of AI tools among attackers increases, the INCD recognizes the need to stay vigilant and develop strategies to counter these sophisticated attacks.

Overall, Abraham Zarouk’s role as the Senior Vice President of Technology at the INCD is crucial in ensuring smooth operations and driving the organization’s preparedness for future challenges. The INCD’s focus on addressing AI weaknesses, collaboration with industry partners, and recognition of potential AI-based threats highlights their commitment to cybersecurity excellence.

Daniel Loevenich

Germany is taking proactive measures to manage the risks associated with artificial intelligence (AI) within complex technical systems. The country is specifically focusing on the AI components or modules within these systems. This approach highlights Germany’s commitment to addressing the potential dangers and challenges that AI can present.

To further mitigate these risks, Germany is working on extending its existing cybersecurity conformity assessment infrastructure. This move aims to establish a robust framework to evaluate and ensure the conformity of AI technologies. The country is also striving to unify AI evaluation and conformity assessment according to the standards set by the EU’s AI Act. This step demonstrates Germany’s dedication to aligning its evaluation processes with international norms and regulations.

The implementation of the AI Act is deemed crucial for managing AI risks in Germany. This legislation, which the country is actively working towards, will play a vital role in addressing technical system risks across the entire supply chain of AI applications. By incorporating this act, Germany seeks to establish a comprehensive and effective framework for managing AI-related risks.

Furthermore, Germany is actively promoting the adoption of AI technologies, particularly among small and medium-sized enterprises (SMEs). The country recognizes the potential benefits that these technologies can bring and encourages businesses to embrace them. This approach highlights Germany’s openness to innovation and its efforts to support the growth of AI within its industries.

There is also support for international standardization in guiding the use of AI technologies. This standpoint suggests that by establishing global standards, individuals can have more control over how AI technologies are utilized. This commitment to international cooperation reinforces Germany’s desire to foster responsible and ethical AI practices.

It is important to acknowledge that AI technologies are heavily reliant on data, and their responsible usage ultimately rests on individuals. Germany recognizes the responsibility that comes with the use of AI systems and the need for individuals to exercise caution and ethics when handling data-driven technologies.

Another noteworthy observation is the call for the market to be the determining factor in deciding the use of AI-based systems. Germany suggests that market forces and customer preferences should dictate the direction of AI technology, promoting a more customer-centric approach to AI adoption.

Nevertheless, standardizing AI usage at a value-based level can be challenging due to the differences in societal values. The discrepancy in value-based governmental positions creates a complex landscape for consensus-building and establishing universal standards for AI application. Germany recognizes this challenge and the need for careful consideration of normative and ethical issues surrounding the use of AI technologies.

In conclusion, Germany is actively implementing AI risk management within complex technical systems, with a particular focus on AI components. The country is working towards unifying evaluation processes and conforming to international standards through the AI Act. Germany also promotes the adoption of AI technologies among SMEs and supports international collaboration in establishing standards for responsible AI usage. However, the challenge of aligning value-based norms and standards remains an ongoing concern for AI implementation.

Hiroshi Honjo

Hiroshi Honjo is the Chief Information Security Officer for NTT Data, a Japanese-based IT company with a global workforce of 230,000 employees. NTT Data is actively involved in numerous AI and generative AI projects for their clients. Honjo believes that AI governance guidelines are crucial for the company, covering important aspects like privacy, ethics, and technology. These guidelines promote responsible and ethical practices in AI development and usage.

In the realm of generative AI, Honjo highlights the significance of addressing cybersecurity intricacies, particularly in light of recent attacks on large language models. This underscores the importance of tackling cybersecurity issues within the context of generative AI.

One complex issue in handling data by generative AIs is determining the applicable law or regulation for cross-border data transfers. Similar to challenges faced by private companies managing multinational projects, NTT Data must navigate various regulations and ensure compliance with jurisdiction-specific requirements.

Honjo advocates for international harmonization of AI regulations, emphasizing that guidelines in G7 countries are insufficient. He supports the establishment of international standards that govern the development, use, and deployment of AI, aimed at promoting fairness and consistency in AI regulation.

Additionally, Honjo expresses his concern regarding uneven data protection regulations like the General Data Protection Regulation (GDPR). He acknowledges that differing data protection regulations across countries impose significant costs on businesses. To mitigate these challenges and ensure a level playing field for businesses operating in multiple jurisdictions, Honjo advocates for consistent and harmonized data protection measures.

In summary, Hiroshi Honjo, as the Chief Information Security Officer for NTT Data, emphasizes the necessity of AI governance guidelines, the need to address cybersecurity intricacies in generative AI, the complexity of cross-border data transfers, and the importance of international harmonization of AI regulations. His commitment to consistent data protection regulations reveals his dedication to reducing costs and promoting fairness within the industry.

Bushra Al-Blushi

Bushra Al-Ghoushi is an influential figure in the field of cybersecurity and currently serves as the Head of Research and Innovation at Dubai Electronic Security Center. She has made significant contributions to the industry through her leadership positions.

One of Al-Ghoushi’s notable achievements is the establishment of Dubai Cyber Innovation Park, which aims to promote innovation and collaboration in the field of cybersecurity. Her involvement in founding this park demonstrates her commitment to advancing the industry and creating opportunities for technological development.

Al-Ghoushi’s expertise is also recognized internationally, as she is an official UAE member of the World Economic Forum Global Future Council on Cyber Security. This highlights her contributions to global discussions and initiatives surrounding cybersecurity.

Furthermore, Al-Ghoushi’s extensive involvement in advisory boards, both nationally and internationally, reflects her broad knowledge and the trust placed in her expertise. These advisory roles enable her to shape policies and strategies in the field, further solidifying her thought leadership and influence.

In terms of AI risks, Al-Ghoushi advocates for a gradual and incremental approach to cybersecurity rules and regulations. She emphasizes the importance of identifying and mitigating potential risks posed by AI through appropriate controls and regulations.

Al-Ghoushi also highlights the significance of considering the deployment of AI models and how they impact security controls. She emphasizes the need for addressing the unique risks associated with AI in their development and implementation, ensuring that adequate security measures are in place.

Regarding policy and regulatory approaches, Al-Ghoushi supports a risk-based approach that strikes a balance between control and security issues. She collaborated with Dubai in 2018 to develop AI security ethics and guidelines, which remain applicable to generative AI today.

Al-Ghoushi emphasizes the need for global harmonization of AI regulations and standards. Currently, different countries have fragmented regulations, making compliance challenging for providers and consumers. Harmonization would simplify compliance and instill confidence in internationally recognized AI tools.

To achieve this, Al-Ghoushi suggests international collaboration and the establishment of an international certification or conformity assessment for AI. This would ensure that AI systems meet minimum security requirements and facilitate compliance for providers while enabling effective enforcement of industry standards by regulatory bodies.

In conclusion, Bushra Al-Ghoushi’s leadership and expertise in cybersecurity are evident through her various roles and initiatives. Her emphasis on gradual, incremental cybersecurity rules and regulations for AI reflects a balanced approach that prioritizes both innovation and security. Al-Ghoushi’s advocacy for global harmonization of AI regulations and the establishment of international certification schemes further underscores her commitment to promoting secure and responsible use of AI technologies.

Session transcript

Moderator – Daria Tsafrir:
Thank you very much. Thank you. I think we’re ready to begin. Okay. Can we have our speakers on Zoom on the screen? Perfect. Can everyone turn on their cameras, please? Yeah, we see you now. Okay, here’s Daniel. So good morning, everyone, and welcome to our session on cybersecurity regulation in the age of AI. I’m Daria Tsafrir, currently a legal advisor at the Israel National Cyber Directorate, leading legal aspects of AI, cloud computing and international law. Unfortunately, due to the current situation in Israel, my colleagues and I were unable to attend the session on site. So our colleague, Dr. Wiener, who is already there, offered his help in moderating on site. So Asaf, let’s start and then get back to me.

Asaf Wiener:
Great. So my name is Dr. Asaf Wiener. I’m from the Israel Internet Association, which is the ccTLD manager of .IL, Israel’s national TLD. We are also the Israeli chapter of the Internet Society, promoting digital inclusion, education and cybersecurity for citizens in Israel, among other things working on digital gaps and other initiatives for the public. I’m not originally part of this panel, so I won’t take too much time to present myself. But I invite everyone who has any questions or wants more details about our activities at Internet Society IL to approach me after the session, and I’ll be happy to introduce myself and our work. And now let’s go back to the original participants of this panel.

Moderator – Daria Tsafrir:
Thank you. So let me ask you, let’s start by introducing yourselves. Let’s start with Dr. Al-Blushi.

Bushra Al-Blushi:
Hello. Good morning, everyone. It gives me great honor and pleasure to share the stage with the great panelists and with everyone here this morning. It’s 5 a.m. in Dubai now. So, my name is Bushra Al-Blushi. I’m the Head of Research and Innovation at the Dubai Electronic Security Center. I’m also the Director General’s Senior Consultant at the center. Basically, it’s the center that sets the rules, regulations and standards, and also monitors the cybersecurity posture here in the city of Dubai. I’m also the founder of the Dubai Cyber Innovation Park, which is an innovation arm of the Dubai Electronic Security Center. I’m the official UAE member in the World Economic Forum Global Future Council on Cybersecurity. I’m also a member of many advisory boards, nationally and internationally. Thank you. Mr. Zarouk?

Hiroshi Honjo:
Okay. So, Mr. Honjo, is there? Yes. My name is Hiroshi Honjo. I think I’m the only one based in Tokyo, Japan, but I just came back from Germany, so I’ve still got jet lag. I’m the Chief Information Security Officer for a Japanese-based IT company called NTT Data, with about 230,000 employees globally. Japan is only a small part of the employees; we do business in more than 52 countries besides Japan. As a private company, we are running many AI, generative AI projects for our clients, so it’s a very hot topic. It’s a pleasure to talk with you. Thank you.

Moderator – Daria Tsafrir:
Ms. Gallia Daor?

Gallia Daor:
Good morning, everyone. My name is Gallia Daor. I’m a Policy Advisor in the OECD’s Digital Economy Policy Division in the Directorate for Science, Technology and Innovation. Our division covers the breadth of digital issues, including artificial intelligence, including digital security, but also measurement aspects, privacy, data governance, and many other issues. But for today, we’ll be focusing on AI and digital security. So, I’ll stop here and I look forward to the discussion.

Moderator – Daria Tsafrir:
Thank you. Mr. Loevenich?

Daniel Loevenich:
Good morning, everyone. I’m Daniel Loevenich. I’m the AI and Data Standards Officer at the German Federal Office for Information Security (BSI), and I’m very much concerned with AI cybersecurity standards. Let me just say that I appreciate sharing the stage with you, and congratulations on a great event so far. Thank you very much.

Moderator 1:
Yes, Abraham, I think we can hear you now. So if you could introduce yourself.

Abraham Zarouk:
Okay, hello everyone. My name is Abraham Zarouk. I’m the SVP of Technology at the INCD, the Israel National Cyber Directorate. I manage the technology division, so I am responsible for day-to-day operations, such as project implementation, IT operations and providing support for national defense activities. I am also responsible for preparing the INCD for the future by creating R&D activities, promoting innovation, establishing a national lab and building national-level solutions. I have eight kids and they always ask a lot of questions, so I already know how ChatGPT feels. Thank you.

Moderator – Daria Tsafrir:
So our session will have two rounds. One will be about the current state of affairs, and the second will deal with whether there is more to be done at the domestic and international levels. So let’s get into it. Now, we are all familiar with the cybersecurity regulation toolkit: breach notification, mandatory requirements for critical infrastructure, risk assessments, info sharing, et cetera. And the question is whether this current toolkit is sufficient to deal with threats to AI systems or to the data used by them. Now, our goal in this session is to get some insights into what governments can do better and where they shouldn’t intervene at all. Please note that when we talk about regulation, we mean it broadly: not only binding rules, but also government guidelines, incentives, and other such measures. So for everyone’s benefit, and so that we can be on the same page, let me turn to Mr. Zarouk and ask you: can you please map out for us, from what you’ve learned, the different cybersecurity risks and vulnerabilities related to AI? Mr. Zarouk?

Abraham Zarouk:
Again, can you hear me now?

Moderator – Daria Tsafrir:
Yes, now I can hear you.

Abraham Zarouk:
Thank you. The INCD focuses on three main domains when addressing AI. The first domain is protecting AI. AI-based models are increasingly being deployed in production in many critical systems across many sectors, but those systems are designed without security in mind and are vulnerable to attacks. Since the average AI engineer is not a security expert, and cybersecurity experts are not domain experts in AI, we need to find a way to establish and improve AI resiliency. The INCD approaches this issue from several angles. One is examining weaknesses in AI algorithms, infrastructure, data sets, systems, and more. This is done as an ongoing task. The INCD promotes R&D projects for testing AI models. Unlike attack surface management (ASM) in the IT world, in the AI world a tailored approach is needed for each algorithm. The INCD focuses on common libraries, models and dedicated attacks. Another angle is building a robust risk model for AI. We attempt to define metrics and models to measure risk in AI algorithms; that is, to measure and test the robustness of AI as we do in other IT domains. A third angle is the national lab for AI resilience. The INCD has established a national lab which develops an online and offline platform for self-assessment of machine learning models, based on the risk model we develop. The national AI lab is a collaboration between the academic world, the government, and the technology giants. The INCD collaborated with the cyber center at Ben-Gurion University, which is a leader in research, and with Google, which brings cloud knowledge in cyber protection and AI. A second significant domain is using AI for defense. Today most tools and products use some form of AI, some more and some less. If you don’t have an “AI inside” logo on your product and you don’t say “AI” three times a minute, no one will buy it.
We understand the power of AI and what it can offer, and as an ongoing effort we make sure our infrastructure and products support the latest AI-powered technology. The INCD, much like many other nations, is promoting innovation and the use of AI-powered technology, since we don’t want to fall behind when it comes to the technology. Our role as a regulator is mainly not to interfere, but to see where we can assist the market in order to promote the implementation and use of advanced technology. We use a variety of tools and capabilities to support our day-to-day operations. This includes tools to help researchers in their cyber investigations, and various automations to assist in analysis and response to incidents as part of our collaboration with Google in the Cyber Shield project. A smart chatbot for our national cyber call center, 119 (a reversed 9-1-1), provides better service to citizens, collects relevant contextual information, provides more focused responses, and supports additional languages. A new tool under development aims to help investigate network traffic captures in an easier, faster and more human way. AI helps us scale and takes care of routine tasks, so in a time of war, AI allows us to direct manpower to critical tasks. We use AI to assist in mediation between the human and the machine. The last domain, but not the least, and maybe the most complex subject, which is currently in design, is defense against AI-enhanced, AI-based attackers. We see an increase in the use of various AI tools among attackers, and we understand that in the future we will see machines carrying out sophisticated attacks. We are currently in the process of designing a way to approach this threat scenario, which will probably be built from several components working together. In the future, we will see attacks and defense fully managed by AI, and the smarter, stronger and faster player will win. Thank you.
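Mr. Zarouk describes a platform for self-assessment of machine learning models against tailored attacks. As an editorial illustration only (not INCD's actual tool), the core idea of probing a trained model with loss-increasing perturbations and measuring the accuracy drop can be sketched as follows; the toy logistic model, the FGSM-style perturbation and all parameters here are assumptions for illustration:

```python
import numpy as np

# Editorial sketch: a minimal FGSM-style robustness self-check for a toy
# logistic-regression "model". This illustrates the kind of automated
# self-assessment described in the session, not any actual platform.

rng = np.random.default_rng(0)

# Synthetic 2-class data: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.7, (200, 2)), rng.normal(1.0, 0.7, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Train logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y = 1)
    grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

def accuracy(X_in):
    preds = (X_in @ w + b) > 0
    return np.mean(preds == y)

# FGSM: perturb each input in the direction that increases its own loss,
# i.e. x' = x + eps * sign(dL/dx), with dL/dx = (p - y) * w for this model.
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = np.outer(p - y, w)
eps = 0.5
X_adv = X + eps * np.sign(grad_x)

print(f"clean accuracy:       {accuracy(X):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv):.2f}")
```

A real assessment platform would run families of such attacks per algorithm, as the speaker notes, since each model class needs a tailored probe.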

Moderator – Daria Tsafrir:
Thank you, Mr. Zarouk. Dr. Al-Blushi, I’m going to turn to you now. Based on your vast experience in your past and current work promoting innovation and shaping policy at both the domestic and global levels, what do you make of AI risks? How do you frame them from a cybersecurity regulation perspective?

Bushra Al-Blushi:
So I think in a city like Dubai, we are always at the forefront of technological transformation. Our role as a cybersecurity regulator is to enable those critical national infrastructures to use the new technologies, but use them with the right controls around cybersecurity, and it doesn’t have to be perfect from the start. So it’s gradual, incremental cybersecurity rules and regulations that we work on together with the business developers, just to make sure that the business objectives are being met and security is also being considered. I will divide what I’m going to speak about into three main points. The first one is the security of the AI models themselves versus the security of the consumers of those AI models. When it comes to the security of AI models and the developers of those models, the rules, the controls, the standards and the policies are totally different from when I’m speaking about the consumers of those AI models. For me, when I’m talking about AI security itself, the AI model itself, AI at the end of the day is like any other software that we were using in the past, but what makes it different is the risk that it might generate, the way it has been deployed, how it is being implemented. So for example, an AI model that is deployed in an IoT bulb shouldn’t have the same security controls as an AI model that is deployed in a connected vehicle, where any risk or any issue in that AI model might impact human lives. So at the end of the day, it’s where that AI model is being deployed, how it is being used and why it is being used that makes it different from any other software development tools that we were used to developing in the past. This is how the AI model itself becomes different from normal software development. Then the second point is the security of the AI consumers. So those people, those government entities, the consumers of the AI themselves.
I think in our scenario, in our case, we are more worried about the consumers than the producers, because we have a few major players, as we can all see, specifically when it comes to generative AI, that are attracting lots of attention and lots of customers. So when it comes to the AI consumers themselves, I think we need to consider many elements. How will that AI be used, where will it be used, will it be used in a critical national infrastructure? And what about the privacy of the data that is being used there? And then also, why am I using that AI model? So I can categorize it, as the previous speaker was saying, into three main areas of how we are using AI today: we might use it to protect, as cybersecurity professionals, using it in the new defensive methodologies that we are using; or it can be used by malicious actors to harm; or, the third category, it can be used by normal users or by government entities, and in that case, we will be worried about the privacy of the data being processed in the AI model. So when it comes to the policies and regulations, I talked about AI security itself and about the consumers, and the last point is the policies, standards and regulations that we need to put around the AI models. I think there have been lots of efforts globally and internationally, with the OECD AI principles, the NIST AI security standard, and then the great bunch of policies that were issued recently in June by the EU. I think we are making progress towards that, having, let’s say, standards or specific policies around the security of AI. But as I said, at the end of the day, it’s like the previous software models that were developed in the past.
So if we think about how we should deal with AI from the policy and regulatory point of view, I think we need to develop, first of all, the basic best practices and principles, like any normal software development life cycle: secure by design, supply chain security. Those basic principles should always be there. Then develop one layer on top, and that layer can be specific to the AI itself: how AI should be developed, maintained and trusted. So another layer which is specific to AI. And the third layer that can be added, as I said, depends at the end of the day on where I’m going to use it. So it’s a sector-specific layer. We can add banking layer controls, transportation layer controls, medicine layer controls. This is the third layer, where we need to work with the business owners or the business sectors themselves in order to make sure that this layer also contains enough controls to enable them to use AI in a safe manner. I strongly believe that a risk-based approach is the best approach we should all consider, because having too many controls will limit the usage of AI, and controls that are too loose will take us into other security issues. In our case, for example, we developed AI security ethics and guidelines back in 2018 that can still be applicable to generative AI. We are also developing an AI sandboxing mechanism for government entities to test and try the AI solutions that they would like to implement at the city level. And we also have clear guidelines about data privacy. As we were saying, most AI models now are hosted in the cloud, so we have a clear model for how information can be dealt with in the cloud, and that includes AI models that are hosted in a cloud environment. So I don’t think we should reinvent the wheel. We should build on the basis of the things that have been there for a long time now.
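The three-layer structure Dr. Al-Blushi outlines (baseline secure-development controls, an AI-specific layer, and a sector-specific layer on top) can be sketched as a simple composition of checklists. All control names and sectors below are hypothetical examples, not the actual Dubai control set:

```python
# Editorial sketch of the three-layer control model described above:
# baseline software controls, plus an AI-specific layer, plus a
# sector-specific layer chosen by where the model is deployed.
# All control names and sectors here are hypothetical examples.

BASELINE = ["secure-by-design SDLC", "supply-chain security", "access control"]

AI_LAYER = ["training-data provenance", "model robustness testing",
            "monitoring for model drift"]

SECTOR_LAYERS = {
    "banking":        ["transaction-fraud controls", "financial-data privacy"],
    "transportation": ["fail-safe behaviour", "real-time integrity checks"],
    "healthcare":     ["patient-data privacy", "clinical-safety review"],
}

def required_controls(sector: str) -> list[str]:
    """Compose the full checklist for an AI system deployed in `sector`."""
    return BASELINE + AI_LAYER + SECTOR_LAYERS.get(sector, [])

for control in required_controls("transportation"):
    print("-", control)
```

The point of the composition is the one made in the session: the first two layers never change, while the third is negotiated with each business sector.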

Moderator – Daria Tsafrir:
Thank you, Dr. Al-Blushi, you’ve raised some very important points. I’ll turn now to Mr. Honjo. Mr. Honjo, you’re representing the private sector. So from your organization’s point of view, how are you currently dealing with AI risks and cybersecurity?

Hiroshi Honjo:
Yes. So, pretty much close to what Dr. Al-Blushi said. As a private company, we have stated AI governance guidelines within the company, and that includes privacy and ethics and technology, everything. So basically, what we do for generative AI as a company: basically we do everything our clients ask for. Many clients ask for, let’s say, application development, for instance. So we do automatic code generation using generative AI. That obviously involves a lot of problems, including intellectual property issues, if the model learns code from whatever source, maybe including commercial code or non-open-source code. So privacy protection, well, intellectual property protection, is a very important thing for the company as well. And also the frameworks, including the OECD or NIST AI frameworks, help define the risks for what the AI project is. So that went pretty well for defining the risks within an AI project. The thing is, although we kind of state the risks within projects, it all comes down to what the purpose of the project is: whether it’s important infrastructure, whether it’s banking transactions, or whether it’s more like what’s on the display here, a transcript. So it really depends on the risks there. All projects are not really the same. As for privacy issues, a lot of the large language models on the market are learning data from somewhere, and you have to learn from a lot of big data. It’s not small data, it’s huge data, and the question is where does that data reside, and who owns the data? Basically, it’s more like the cross-border data transfer issues: what’s the data source, what’s the use of the data; basically, it’s international transfer. So the question is which laws or regulations will be applied to that data. That’s a bit like the cloud issues, the same as the cloud issues, so there are no easy resolutions for that.
So basically, we have to deal with all the data around generative AI. So a lot of privacy protections, or anything about cybersecurity, whatever happens in cybersecurity, also applies to generative AI. So basically, when you talk about AI and security, or AI guidelines, or whatever you state within a private company, it really depends on, and includes, the data and privacy: when the data is compromised, the data source is compromised, or the result of the data is compromised, or any breaches happen within the large language model, which has been attacked a couple of times. So those are really lessons learned: cybersecurity also applies to, not all, but part of the generative AI projects. As a private company, it’s not single-company, country-level; we need to deal with multinational, multi-country-level projects that have to handle all the data and privacy issues, and also the need to protect the models or the data where they reside. So it’s pretty much risk-based management. So it’s all about money. But basically, due to the multinational projects, there are no easy resolutions. But with the guidelines and some of the lessons, the things we apply in cybersecurity carry over into generative AI and resolve some of the issues residing in generative AI projects. But as I said, we have to deal with a lot of different countries. So that’s our challenge right now. Not the technology itself; it’s more the cross-border, multinational different regulations. That’s the real challenge for a private company. I think I’ll stop here.

Moderator – Daria Tsafrir:
Thank you. That was very interesting. Ms. Daor, I will turn to you now. The OECD was the first, if I’m not mistaken, to publish clear principles for dealing with AI risks. Could you share with us the OECD’s policy from today’s point of view, with an emphasis on the robustness principle? And maybe a word on where we are headed.

Gallia Daor:
Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernmental organization to adopt principles for artificial intelligence. These principles seek to describe what trustworthy AI is. They have five values-based principles that apply to all AI actors, and five recommendations for policymakers specifically. And within these principles, like you said, we have a principle that focuses on robustness, security, and safety, which provides that AI systems should be robust, secure, and safe throughout their lifecycle, which I think is a particularly meaningful aspect. The principles also note that a systematic risk management approach to each phase of the AI system lifecycle, on a continuous basis, is needed. So I think it gives the beginning of an indication of how we can apply a risk management approach in the context of AI. These principles have now been adopted by 46 countries and also served as the basis for the G20 AI principles. And since their adoption in 2019, we’ve worked on providing tools and guidance for organizations and countries to implement them. So we took a set of three different types of actions. One focuses on the evidence base: we developed an online interactive platform called the OECD.AI Policy Observatory that has a database of national AI policies and strategies from over 70 countries, and also data, metrics and trends on AI: AI investment, AI jobs and skills, AI research publications, and a lot of other information. We also work on gathering the expertise. So we have a very comprehensive network of AI experts, now with over 400 experts from a variety of countries and disciplines, that helps us take this work forward. And we also develop tools for implementation. So we have a catalog of tools for trustworthy AI. Sorry, I should say, we don’t develop the tools, but we compile them.

So we have this catalog where different organizations and countries can submit the tools that they have. We process that, and anybody can access it and see what is out there that can be used. And that is also the context of our increasing focus on risk management and risk assessment in AI. Last year, we published a framework for the classification of AI systems. As others have noted, the risk is very context-based: for a system in the abstract, we don’t know what risk it may pose; it depends on how we use it, who uses it, and with what data. So this classification framework is really there to help us identify the specific risks in a specific context. We will also soon publish a mapping of different frameworks for risk assessment of AI, what they have in common, and the top-level guideposts that we see for risk assessment and management in AI. So that’s the main focus on AI here. But I do want to say a word about the OECD’s work on digital security, which is our term for cybersecurity in the economic and social context. We have an OECD framework for digital security that looks at four different aspects. It has the foundational level, which is the principles for digital security risk management: general principles and operational principles for how to do risk management in the digital security context. It also has a strategic level: how you take these principles as a country and use them to develop your national digital security strategy.
We have a market level, where we look at how we can address misaligned incentives in the market, including information gaps, to make sure that both products and services are secure. In particular, as others have mentioned, AI is now increasingly used in the context of critical infrastructure and critical activities, so we have a recommendation on the digital security of critical activities. And the last level is a technical level, where we focus on vulnerability treatment, including protections for vulnerability researchers and good practices for vulnerability disclosure. And I think this leads, and maybe I’ll stop here, to what the others have said about the intersection between AI and digital security, which is really the heart of today’s conversation. As the first intervention by Mr. Zarouk said, we see that we need to focus both on the digital security of AI systems, so what do we need to do to make sure that AI systems are secure, in particular looking at vulnerabilities in the area of the data that is used, at data poisoning, and at how that can affect the outcomes of an AI system. But we also need to think about how AI systems may themselves be used to attack. Generative AI is maybe somewhat of a game-changer in this aspect too: we know, for example, that generative AI can be used to produce very credible content that can then be used at scale in phishing attacks, for example. And also, and this is work that we have not yet done, how AI systems can be used to enhance digital security. So I’ll say just one word on that: at the OECD we have the Global Forum on Digital Security for Prosperity, which is an annual event where we bring different stakeholders from a very large range of countries to talk about the hot topics in digital security.

And the event that we did earlier this year, jointly with Japan, focused exactly on the link between digital security and technologies, with AI obviously being one of the key focuses. And that was exactly one of the themes of our discussion there. So I’ll stop here, but thank you.
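The point that risk is context-based, so the same model can land in different risk tiers depending on who deploys it, on what data, and in which setting, can be sketched as a toy classification rule. The attributes, scoring and tier names below are invented for illustration and are not the OECD classification framework itself:

```python
from dataclasses import dataclass

# Editorial sketch: classifying an AI deployment by its context rather
# than by the model in the abstract. The attributes and scoring here are
# invented for illustration; this is not the OECD framework itself.

@dataclass
class Deployment:
    critical_infrastructure: bool   # e.g. energy grid, connected vehicles
    personal_data: bool             # processes personal data
    autonomous_action: bool         # acts without a human in the loop

def risk_tier(d: Deployment) -> str:
    # Count how many risk-raising context attributes apply.
    score = sum([d.critical_infrastructure, d.personal_data, d.autonomous_action])
    return {0: "minimal", 1: "limited", 2: "high"}.get(score, "critical")

chatbot = Deployment(False, True, False)   # a transcript-style chatbot
vehicle = Deployment(True, True, True)     # a connected vehicle
print(risk_tier(chatbot))
print(risk_tier(vehicle))
```

The same underlying model object would pass through this function with different answers depending on where it is deployed, which is exactly the context argument made in the session.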

Moderator – Daria Tsafrir:
Thank you, Gallia. I can share with you that Israel has adopted the OECD principles into its guideline papers on AI. At the moment, the guidelines are non-legally binding, and the current approach is for sectoral regulators to examine the need for specific regulation in their field. But I imagine we will soon be looking into the AI Act as well. So now I’ll turn to Mr. Loevenich. Could you share with us Germany’s policy regarding cybersecurity and AI? How, in your opinion, will the AI Act affect Germany’s policy and regulation? How will you implement it into your legal system?

Daniel Loevenich:
Yeah, it’s a very difficult question. Challenging. Since the AI Act is, as you know, brand new. But indeed, we in Germany are very much concerned with the European perspective on AI. And just let me stress the fact that especially on the EU level, the Union and the standardization organizations, like CEN-CENELEC JTC 21, as you know, do a great job on that. They very much focus on the ten issues addressed in the AI Act standardization request. And we in Germany are very much looking forward to implementing procedures and infrastructures, based on our conformity assessment and especially certification infrastructures, to implement the technical basics for conformity assessment against these standards. But first of all, let me stress the fact that if we say AI risks are special risks to cybersecurity, then we always have in mind the technical system risks, like, for instance, a vehicle. And especially for embedded AI in such a technical system, we address all these risks based on our experiences with engineering and analysis of these technical systems. Or, in the case of a distributed IT system with a whole supply chain in the background, we have special AI components or modules, for instance cloud-based services, that play a key role for the whole supply chain. So we address the risks in terms of the whole supply chain of the application. And it’s very important to be aware that when we, in Germany, consider AI risks, we have to concentrate on these AI modules within those complex systems. And we do that by mapping these application- or sector-based risks, which may be regulated, of course, by standards, down to technical requirements for the AI modules within them. And of course, we have a lot of stakeholders being responsible and competent to address these risks.
And they are responsible for implementing the special AI countermeasures, the technical countermeasures, within their modules during the whole life cycle, as we heard from the speakers already. And this is where we concentrate, especially in Germany, but also in the EU. The overall issue is to build a uniform AI evaluation and conformity assessment framework, independently of who is responsible for implementing the countermeasures against the cybersecurity risks and for making sure that they work effectively. And this is a European approach. It is the number one key political issue in the German AI standardization roadmap. So if you ask me what we do next: yes, on the basis of the existing cybersecurity conformity assessment infrastructure, like attestation, second-party or third-party evaluation, certification, and so on, we try to address these special AI risks as an extension of the existing frameworks, implementing the EU AI standardization requests. Does that answer your question, basically?
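The mapping Mr. Loevenich describes, from application- or sector-level risks down to technical requirements on the embedded AI modules, can be sketched as a simple lookup. The risk and requirement names below are hypothetical examples, not an actual BSI catalog:

```python
# Editorial sketch of mapping sector-level risks down to technical
# requirements on the embedded AI modules, as described above.
# All risk and requirement names are hypothetical examples.

RISK_TO_MODULE_REQUIREMENTS = {
    "passenger safety": ["bounded inference latency", "fallback controller"],
    "data poisoning":   ["training-data integrity checks", "provenance logging"],
    "model theft":      ["encrypted model storage", "query-rate limiting"],
}

def module_requirements(sector_risks: list[str]) -> set[str]:
    """Collect the technical requirements implied by sector-level risks."""
    reqs: set[str] = set()
    for risk in sector_risks:
        reqs.update(RISK_TO_MODULE_REQUIREMENTS.get(risk, []))
    return reqs

# A vehicle-like system faces safety and poisoning risks:
for req in sorted(module_requirements(["passenger safety", "data poisoning"])):
    print("-", req)
```

The resulting requirement set is what a conformity assessment scheme would then evaluate per module, regardless of which stakeholder implements each countermeasure.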

Moderator – Daria Tsafrir:
Thank you. Thank you so much. And you actually brought me directly to the second round of our session, which is what’s missing and what we can do better. As some of you mentioned already, one of our major concerns as a government is the protection and safety of critical infrastructures and, as a result, supply chains. Recently, we are also looking into SMEs. So I have two questions, if you could address them shortly. One is, what should governments be doing in the regulatory space to improve the cybersecurity of AI systems? And when we talk about regulation, I think we need to address two subjects: we need to consider the risks of over-regulation, and we also need to ask whether AI is perhaps too dynamic for regulation. The second question is, how much of the challenge should be addressed within international forums, including maybe binding treaties? So if you could address these questions, and if you have an idea or advice for the future, I’ll be glad to hear it. I think we’ll keep the same order, so we’ll start with Dr. Al-Blushi and go on from there.

Bushra Al-Blushi:
Yeah. I think I will take it from the international perspective. As we can see in the current landscape, many AI acts are being developed and issued by different countries, and it’s totally fragmented. It’s very difficult for both providers and consumers to adapt at the end of the day. So assume that I’m providing those services or AI models in 100 countries and I’m facing 100 acts; which one should I comply with? Shouldn’t we harmonize, or shouldn’t we come up at least with minimum requirements for conformity assessment or for compliance, which would make it much easier for the producers to comply? At the end of the day, it will also give consumers the confidence that this AI tool is internationally recognized by multiple countries. That fragmentation, as I said, makes it really difficult for both consumers and providers to comply. International collaboration and the harmonization of AI standards and compliance requirements can address those challenges. Actually, this was one of the papers that we published last year with the World Economic Forum, calling for a harmonized international certification scheme for different things. AI was not part of it, but at least it addressed the idea of how harmonization should be done and what the minimum requirements are. I’m not saying it’s a full certification that countries should rely on, but at least it’s a minimum-requirements certification or minimum-requirements conformity assessment that makes it easier for providers to comply, and it will also make our role as regulators much easier, let’s say, than having different standards, different requirements and different acts in different countries. In a nutshell, I think harmonization of international requirements is very important in order to move forward with the different AI acts that we have today.

Moderator – Daria Tsafrir:
Thank you. Mr. Honjo?

Hiroshi Honjo:
Yeah, Dr. Al-Blushi, you said almost everything I wanted to say, but basically, as a private company, we need international harmonization of all the regulations. In the keynote speech of this IGF, our Japanese Prime Minister, Kishida-san, said there will be AI regulations and guidelines in the G7 countries. That’s OK, but it’s not enough; there are more countries. So we need at least minimum requirements, minimum harmonization, to run a business across multiple countries. I’m kind of looking forward to that. But what I don’t want to happen is what happened with data protection, with GDPR. Some countries have very strong regulations; other countries have very soft law. For a private company, that costs a lot. So I hope everything will be harmonized for AI. I’ll stop here.

Moderator – Daria Tsafrir:
Thank you. Ms. Bauer?

Gallia Daor:
Thank you. Yeah, so I think we’ve heard a lot about the fragmentation issue, and obviously that’s a serious issue. I think it’s difficult to talk in the abstract about whether we should or shouldn’t have regulation, because these things are already happening. So I think it’s also important to talk about what we do with this. From the perspective of an international organization, we can talk perhaps about three roles of intergovernmental organizations and what they can do to help countries and organizations in this situation. One thing is mapping the different standards and frameworks and regulations out there and trying to identify commonalities, perhaps minimum standards, and develop some sort of practical guidance from that. But I think another important role is the ability of intergovernmental organizations, and we see that here today, to convene the different stakeholders from the different countries and from the different stakeholder groups to flag their issues and have that conversation. And perhaps a third aspect is to advance the metrics and measurement of some of these issues that are very challenging. So in the context of our work on AI, we are developing, and will launch next month, an AI incidents monitor that looks at real-time, live data to see what actual incidents AI systems cause in the world. And I think that’s maybe one step to advance that issue. Thank you.

Moderator – Daria Tsafrir:
Thank you. Mr. Loevenich?

Daniel Loevenich:
Yeah, we in Germany want to open markets to new technologies. We want people to be creative with AI technologies. We want SMEs to be on their way to use these technologies and even to develop new ideas with them. So we really don’t want to prescribe things; we just want to recommend that people and organizations do certain things. Basically, and obviously, the first and overall instrumentarium for this is international standardization, so that people can decide, on different issues and on their own risks and requirements, to use technologies in particular ways and not to use them, or to misuse them, in other ways. Please allow some remarks on those standardization issues, especially at the ISO level. My experience is that there are a lot of people involved. Many of them are AI experts, but I can distinguish three schools of thought: the technical, which is application-agnostic; the sectoral, meaning application-specific, in contrast to the technical, application-agnostic view; and the normative and ethical things on top. It’s nothing new; these are three different aspects of AI technology. Since these systems are data-driven, we have data in them, and it is used as machine-understandable data, not readable data, but understandable data. So people are very much responsible in using these technologies for specific purposes. Now then, if you have appropriate standards, and speaking of harmonization, you can do this on the technical level, like ISO does, like CEN-CENELEC does, like other people do. It’s very easy. If you come to application-specific requirements, you can standardize that. In Europe, we have ETSI, for instance, or ITU, or the normative bodies for the healthcare sector. Very effective. You can do that. And you can do it even on the application- and sector-specific levels. You can do regulation if you want, but let the market do it. Let them decide: this is the use of our AI-based systems.
And let the market and the customers decide: I want to use this technology in the way that is regulated by blah, blah, blah. The third school of thought, or level, is very much specific to value-based things. There are society and all these kinds of organizations, and digital sovereignty and other aspects that play a key role in that. In the EU, for instance, you have 27 nations, if I’m right, with probably 27 different value-based governmental positions on that. So it’s very, very difficult. Our time is coming to an end, so I’m going to stop here. But this is the difficult part. It was very interesting.

Moderator – Daria Tsafrir:
Yes, thank you. I did steal back our five minutes, I have to say. But well, anyway, time flies when you’re having fun. And our time is unfortunately up. So I would like to thank you all for participating. And I know some of you had to wake up very, very early in the morning. So I really appreciate your effort. It was very interesting and very enlightening. And I hope to see you soon, maybe on the follow-up session.

Abraham Zarouk

Speech speed

103 words per minute

Speech length

819 words

Speech time

475 secs

Asaf Wiener

Speech speed

133 words per minute

Speech length

133 words

Speech time

60 secs

Bushra Al-Blushi

Speech speed

165 words per minute

Speech length

1650 words

Speech time

601 secs

Daniel Loevenich

Speech speed

98 words per minute

Speech length

1083 words

Speech time

664 secs

Gallia Daor

Speech speed

163 words per minute

Speech length

1509 words

Speech time

554 secs

Hiroshi Honjo

Speech speed

98 words per minute

Speech length

942 words

Speech time

578 secs

Moderator – Daria Tsafrir

Speech speed

143 words per minute

Speech length

1024 words

Speech time

429 secs

Moderator 1

Speech speed

164 words per minute

Speech length

17 words

Speech time

6 secs

Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Irina Soeffky

The Indian G20 presidency deserves commendation for its focus on digital public infrastructure, emphasizing the importance of integrating technology into public infrastructure. Germany is also actively contributing to the development of digital public services through projects such as eID (electronic identification) and the EU-wide ID, aiming to enhance digitization across various sectors.

Irina Soeffky, a supporter of international cooperation for digitalization, recognizes the need for collaboration in this field. The Federal Ministry of Economic Cooperation and Development, in collaboration with GIZ (the German Corporation for International Cooperation), is actively providing open and interoperable elements to countries. Their goal is to assist countries in building their public infrastructure and fostering cooperation in digitalization, highlighting the importance of international partnerships.

The GovStack initiative, a noteworthy project, promotes interoperability and openness in public infrastructure. It is remarkable for creating public infrastructure that is not only interoperable but also reusable, enabling new business possibilities and fostering innovation in digital public services.

In conclusion, the Indian G20 presidency’s focus on digital public infrastructure and Germany’s contributions through projects like eID and the EU-wide ID emphasize the significance of digitization in various sectors. Irina Soeffky’s support for international cooperation in digitalization and the GovStack initiative’s efforts to promote interoperability and openness further reinforce the importance of collaboration and innovation in building digital public infrastructure. These initiatives collectively contribute to the advancement of technology and digitalization globally.

Audience

The analysis of the implementation of digital public goods (DPG) and digital identity systems highlights the need for a coordinated and inclusive approach. It stresses the importance of instilling the DPI mindset in policymakers and leaders, especially in terms of championing successful implementations like Amado. The analysis also points out coordination problems within governments, such as turf wars and a lack of unified effort, which hinder the implementation of DPG and digital identity systems.

To overcome these challenges, the analysis suggests starting with a use case and building upon it in a way that allows others to easily plug into the system. Emphasizing minimalism can also contribute to a more effective approach.

Learning from both successful and unsuccessful implementations is crucial. The Indian experience is particularly highlighted, where a digital identity project was implemented without a legal framework that adequately protected data rights. The reliance on a centralized, cloud-stored biometric database proved to be problematic. By examining this case, valuable lessons can be learned to avoid similar mistakes in the future.

The analysis also addresses the issue of digital identity misuse and exclusion. Insights from experiences in India, Kenya, and the Philippines can inform the efforts of communities like the Digital Public Goods Alliance (DPGA) in mitigating these issues. It recommends involving human rights groups in the consultative process to ensure a more comprehensive and inclusive approach.

Furthermore, the analysis draws attention to the high failure rate of identity systems in India and its impact on public welfare delivery. Low levels of digital literacy play a significant role in these failures. A bottom-up approach for the redressal mechanism of digital identity systems is proposed to address these challenges.

Additionally, the necessity of user choice in dealing with system failures is highlighted. Allowing users the option to switch to human assistance when there is a digital verification failure can especially benefit regions with low levels of digital literacy. This user-centric approach ensures that individuals with limited digital skills are not excluded from the benefits of digital public goods and services.

Overall, the analysis emphasises the need for a coordinated and inclusive approach in implementing digital public goods and digital identity systems. It highlights the importance of the DPI mindset, learning from past experiences, mitigating harm and exclusion, involving human rights groups, adopting a bottom-up approach, and providing user choice. Following these principles will help achieve effective and secure digital public goods and identity systems.

Adriana Groh

The concept of Digital Public Infrastructure (DPI) extends beyond immediate innovations and necessitates a focus on the robust and ongoing maintenance of software components. It is concerning that 64% of the 133 most widely used software components are in critical shape and are only maintained by a few individuals. These software components are not only critical but also vulnerable, posing a significant risk if they were to break, as they are extensively used in our day-to-day lives.

Adriana, a strong advocate for a holistic approach to DPI, emphasises the importance of securing and maintaining these underlying software components, which often go unnoticed. She underscores the need for open digital-based technologies and the dependence on open-source software that operates in the background and is continuously maintained and available to support the functioning of various systems.

Interoperability and adaptability of public software and digital technology are also vital aspects that Adriana highlights. She suggests that sharing and learning together will help achieve this goal and points out the need for interoperable software and digital technologies. Additionally, she mentions the concept of ‘public money, public code’, asserting that software financed by taxpayers’ money should be open and adaptable.

Adriana further argues that redundancy in digital base technologies is indispensable to prevent single points of failure. She explains that having similar tools running concurrently ensures that if one fails, alternative routes can be taken to ensure uninterrupted functionality. She employs the analogy of a road, emphasising the significance of having multiple routes to reach a destination.

Moreover, Adriana emphasises the necessity of international cooperation in addressing global digital challenges. She highlights the potential risks of the ecosystem being torn apart without international cooperation and underscores the need for well-coordinated efforts. She draws attention to the “tragedy of the commons” in the context of the digital public commons, where everyone relies on it but no one feels responsible. This further underscores the importance of international cooperation and shared responsibility.

In conclusion, the concept of Digital Public Infrastructure encompasses the maintenance of software components beyond immediate innovations. Adriana advocates for a holistic approach, with an emphasis on securing and maintaining underlying software components, promoting open digital-based technologies, interoperable and adaptable public software, and redundancy in digital base technologies. International cooperation is crucial in tackling global digital challenges and ensuring well-coordinated efforts in the digital public commons.

Valeriya Ionan

The speakers provide valuable insights into the digital transformation in Ukraine, emphasizing the importance of creating a conducive environment that allows all participants to work efficiently within the digital ecosystem. The success of the DIA app in Ukraine is highlighted, with its impressive user base of over 19.5 million users. The app is not limited to digital documents and online government services; it also focuses on streamlining workflows in both the public and private sectors.

Furthermore, the government of Ukraine is commended for its effective collaboration with startups, private companies, and civil society. This demonstrates the need for governments to adopt an agile and flexible approach, operating more like IT companies. The speakers advocate for the implementation of Chief Digital Transformation Officer (CDTO) positions within governments to expedite the digital reform process across various levels and spheres.

The importance of public-private partnerships is emphasized as an effective means to enhance digital infrastructure and literacy. The creation of the DIA Education platform in collaboration with the private sector and civil society serves as an example of how such partnerships can contribute to improving digital literacy. The platform focuses on equipping individuals with the necessary digital skills and knowledge.

The speakers also highlight the value of learning from successful digital transformations in other countries. Ukraine draws inspiration from Estonia’s digital transformation and actively incorporates their GovTech products and experiences. This approach encourages governments to leverage existing successful solutions rather than investing time in seeking new ones.

Effective communication and collaboration between the government, civil society, and the private sector are seen as crucial for the progress of digital transformation. The establishment of platforms that facilitate the exchange of digital products and experiences is recommended. This allows for the sharing of best practices, knowledge, and experience across regions.

Additionally, the speakers stress the importance of world-class education programs catering to digital leaders. They argue that such programs should not only provide knowledge and expertise but also offer networking opportunities. Currently, there is a lack of academic or non-academic programs specifically tailored to preparing individuals for Chief Digital Transformation Officer roles within governments.

The speakers emphasize the need to view digital transformation as a comprehensive system rather than isolated initiatives. This holistic perspective ensures that all aspects, including user-centric and human-centric services, are considered. Building a digital country is not just about technological advancements but also about inclusivity and improving the basic level of digital literacy.

This analysis provides valuable insights into the digital transformation efforts in Ukraine, highlighting the successes achieved through the DIA app, effective collaboration between the government and other stakeholders, the importance of public-private partnerships, and learning from other countries’ experiences. The introduction of CDTO positions and the need for communication platforms and world-class education programs are also emphasized. Overall, the speakers’ arguments shed light on the essential factors and strategies required for successful digital transformation in Ukraine.

Mark Irura

The analysis highlights several important points regarding the development and maintenance of digital services and infrastructure.

One key point is the need for a community approach. This involves developers putting in their intellect and energy to build these services and maintain them over the long term. Open source development is seen as crucial in enabling the developer community to contribute to long-term development. The argument is that there is no community in between the demand and supply to be able to innovate around packages of reusable, interoperable components.

On the topic of governance, it is emphasized that the government is responsible for maintaining the vision and foresight of the digital platform in the long term. The regulation aspect or the vision or the foresight cannot be delegated by the government or funders. This illustrates how governance plays a vital role in implementing digital public infrastructure, with proper procedures required to address issues that cannot be solved by technology alone.

Long-term planning and examination are necessary for the successful implementation of digital public infrastructure. The pressure to show immediate results can hinder the progress of digital projects. It is more beneficial to think in longer terms and allow time for the development and improvement of these digital infrastructures. Data sharing across agencies also requires a long-term viewpoint to understand its implications. This highlights the need for long-term planning and examination for the implementation of digital public infrastructure.

Another important aspect is the development of skills to support the procurement of digital public goods. The government should develop skills to handle the procurement of these goods, which can help lower their total cost of ownership. Additionally, funding instruments should be designed to sustain long-term projects that, although may not show immediate results, will benefit in the longer run. For example, the construction of the foundation of a house might take a lot of resources and not show immediate results, but it is crucial for the overall structure. This argues for the development of funding instruments that reflect long-term objectives.

Increasing trust between citizens and their governments regarding digital IDs in Africa is crucial. Public participation in designing these solutions is often treated as an academic exercise, leading to a low level of trust due to a connection deficiency between data and service delivery. Regulations are needed to help citizens push back and use the instruments of the law. Trust is also hampered by the politics of how everything is done. This reveals the importance of increasing trust between citizens and their governments when it comes to digital IDs in Africa.

Citizen involvement in defining or designing solutions can drive, or stop, court cases over the implementation of the system. Testing the laws and involving citizens in the process can increase trust. Policies should take account of individual rights, such as data rights. This highlights the need for citizen involvement in the development and implementation of digital infrastructure.

Consideration of the ‘total cost of ownership’ is critical during procurement. Shared experiences reveal challenges in terms of costs for SMS systems and infrastructure development when handing over systems to the government. It is important for the government to try out systems before making purchase decisions to avoid being locked in and facing issues later. This emphasizes the importance of considering the long-term implications of licensing at the database, middleware, or application level during government purchases.

In terms of security and data protection, the analysis advocates for preventative and curative measures for digital public good security. The Digital Public Goods Alliance is developing good practice principles, and adapting these principles can preempt issues. This supports the need for measures to ensure the security of digital public goods.

In conclusion, the analysis emphasizes the importance of a community approach, governance, long-term planning, skill development, funding instruments, trust-building, citizen involvement, and security measures in the development and maintenance of digital services and infrastructure. These insights shed light on the challenges and considerations that need to be taken into account for the successful implementation of digital services and infrastructure.

Pramod Varma

India’s approach to Digital Public Infrastructure (DPI) emphasizes the importance of civil society and citizen engagement to improve privacy and inclusion elements in DPI building. The use of civil society or citizen engagement as a supply-side tool is considered common and essential in this process. Creating one solution infrastructure and building several solutions on top of it is key for India due to its diversity and scale.

Marketplaces and non-governmental organizations (NGOs) play a crucial role in the DPI ecosystem. Marketplaces are important for creating sustainability and agile innovation, while NGOs are key in addressing diverse needs, especially for vulnerable sections of society. The DPI also reduces the cost of solutioning for NGOs, making it feasible for them to develop solutions for specific sections.

A minimalistic approach is emphasized in building DPIs in India. The identity project, payment project, and credential sharing in India were built with this minimalistic principle. This approach aims to streamline processes and ensure efficiency in delivering digital public services.

Participatory governance, accountability, and dispute grievance resolution are crucial in implementing digital infrastructures. Governance plays a vital role in the effective implementation of DPI.

Resilience and redundancy are necessary aspects of digital infrastructures. India has implemented three or four payment systems for this purpose, ensuring resilience and redundancy.

Societal, political, and regulatory buy-in is necessary for the successful implementation of digital public infrastructure. Given that these DPIs impact a billion people in India, significant support and coordination from society, political leadership, and regulatory bodies are essential.

Global coordination is critical for interoperability. As people seek opportunities for work, education, and healthcare across countries, global coordination is important for seamless cross-border operations.

Support for sharing digital assets as open source goods accelerates digital innovation. The availability of Digital Public Goods (DPGs) and open-source goods contributes to rapid development and adoption of digital technologies.

A common definition and understanding of DPI have been created among many countries through G20 coordination and discussions. This has led to the development of a shared vocabulary and a common set of principles that underpin DPI.

The context of each country is unique, necessitating country-specific DPI implementations. Many countries are working to create their own DPI implementations to cater to their specific needs and challenges.

Efforts to share assets via the Digital Public Goods (DPG) ecosystem are supported by DPI funds. These initiatives aim to facilitate the development and sharing of digital assets, fostering collaboration and innovation.

The establishment of identity systems varies for each country, revolving around their specific context. Starting now, countries are advised to have full legal support, especially for identity systems.

Data storage in identity systems should be minimalist and secure. The identity system in India, for example, has not been breached so far, highlighting the importance of secure storage practices.

The analysis emphasizes the significance of participatory governance, the need for resilience and redundancy in digital infrastructures, and the importance of societal, political, and regulatory buy-in. It also highlights the critical role of global coordination, support for open-source sharing, and the country-specific nature of DPI implementations. These findings provide valuable insights into India’s approach to DPI.

Aishwarya Salvi

Digital Public Infrastructure (DPI) has the potential to transform governments, economies, and societies worldwide. The rapid advancement of digital technologies has significantly changed how we interact and conduct business on a global scale. Governments are now adopting various approaches to implement DPI, recognizing its crucial role in facilitating participation in society and markets.

The successful implementation of DPI requires striking a delicate balance between the diverse needs and interests of different stakeholders. This complex task involves carefully considering the expectations and demands of governmental bodies, businesses, civil society, and the general public. By effectively managing these varying perspectives, DPI can be tailored to meet the specific requirements of each stakeholder group, ensuring inclusivity, interoperability, and accountability.

International cooperation plays a vital role in fostering the creation of inclusive, interoperable, and accountable DPI. Collaboration on a global scale allows governments, policymakers, regulators, businesses, and civil society organizations to leverage shared knowledge and resources in developing DPI solutions that empower individuals and enable seamless interactions across borders. The German Federal Ministry of Digital and Transport, in conjunction with GIZ, has organized digital dialogues as a platform for direct exchange and discussions on the importance of international cooperation in developing effective DPI solutions.

These digital dialogues enable policymakers, regulators, businesses, and civil society representatives to engage in meaningful conversations, sharing lessons, ideas, and perspectives on approaches to implementing DPI. Through this collaborative effort, best practices and innovative strategies are identified, guiding countries in establishing robust DPI frameworks.

In conclusion, DPI has the potential to revolutionize governments, economies, and societies worldwide. Its successful implementation rests on finding a delicate balance between the diverse needs and interests of stakeholders. Moreover, international cooperation is crucial for fostering the creation of inclusive, interoperable, and accountable DPI. The digital dialogues organized by the German Federal Ministry of Digital and Transport in partnership with GIZ provide a valuable platform for policymakers, regulators, businesses, and civil society to exchange ideas and insights, contributing to the development of effective DPI solutions.

Moderator

The analysis focuses on the challenges faced during the digitisation process of India’s digital identity project. One of the major issues identified is the absence of a proper legal framework to govern the project. This lack of legal guidelines created uncertainties and posed challenges in implementing and regulating the digital identity system effectively.

Another significant concern is the inadequate consideration given to data protection rights. The digitisation process failed to account for the rights of individuals in relation to their personal information. This omission raises important questions regarding privacy and data security in the digital identity system.

Furthermore, the design of the system, which relied on a centralised cloud-based and cloud-stored biometric database, is notable. While this approach may have certain advantages in terms of convenience and accessibility, it also raises concerns about the security and potential misuse of personal data stored in the cloud. These issues highlight the need for a more thorough and thoughtful approach to the design and implementation of such systems.

In light of these challenges, the analysis suggests that the global community can learn from the mistakes made in India’s digital identity project. By examining these shortcomings and addressing them proactively, other countries and organisations can avoid similar pitfalls and create more robust and secure digital identity systems.

Additionally, the analysis highlights concerns about the misuse and exclusion of digital identity in infrastructure rights and governance. It argues that steps need to be taken to mitigate the potential harms associated with such misuse and to ensure that the benefits of digital identity are shared inclusively among all individuals and groups. To achieve this, the analysis recommends consulting with human rights groups and other stakeholders, as they can bring valuable insights and perspectives to the decision-making process. By including them in the consultative process, the aim is to mitigate risks and harms and ensure that the design and implementation of digital identity systems align with human rights principles.

In conclusion, the analysis underscores the importance of addressing the challenges faced in the digitisation of identity systems by considering legal frameworks, data protection rights, and the design of the systems themselves. Learning from the Indian experience and the mistakes made can benefit the global community in developing secure and inclusive digital identity solutions. Furthermore, the involvement of human rights groups and other stakeholders in the decision-making process is crucial for mitigating risks and ensuring that human rights are upheld in the digital age.

Session transcript

Aishwarya Salvi:
Hello everyone, a warm welcome to all of you who have joined us in this room, and also to everyone who has joined us virtually; a big thank you for attending this session, Creating Digital Public Infrastructure that Empowers People. My name is Aishwarya Salvi. I’m an advisor at the German Agency for International Cooperation, GIZ, working in the field of digital governance, and I’ll be your on-site moderator today. A brief note on housekeeping and what we have planned for the session. Our session is being held in a hybrid format, and it will be a roundtable discussion with an open Q&A. We highly encourage all participants to contribute to this discussion. For all participants who are joining us virtually, please keep your microphones muted during the session. You are encouraged to post questions and comments in the chat box at any point in time. My colleague Torgy Walters will be monitoring the chat and fielding questions from there for our Q&A rounds. This session is organized by the German Federal Ministry of Digital and Transport together with GIZ. The ministry engages in digital dialogues with several key partner countries to ensure that we shape better framework conditions for the digital transformations of our governments, economies and societies. As a multi-stakeholder initiative, the digital dialogues provide a platform for direct exchange between policymakers, regulators, businesses and civil society. The goal of this session is to share lessons on approaches undertaken by the countries represented on this panel in the implementation of digital public infrastructure. We all know digital technologies have drastically transformed the way we interact and transact in the world. The most notable means of digital transformation has been the development of digital public infrastructure. So what do we mean by DPI?
DPI are society-wide digital capabilities that are essential to participation in society and markets as a citizen, entrepreneur and consumer in the digital world. With the growing demand, governments are now adopting different approaches to implement DPI based on the availability of resources, engagement with the private sector, interaction with civil society and citizens, and also support from international organizations. In this session we set out to understand the existing DPI ecosystem in the countries that are represented on the panel. We also look at the steps taken by governments to balance the differing needs and interests of stakeholders. Additionally, we will use this opportunity to exchange lessons from DPI implementation and discuss how international cooperation can foster the creation of inclusive, interoperable and accountable DPI that empowers people. For this discussion we are joined by our esteemed panel members, who have contributed extensively in the field of digital transformation. First off we have Valeriya Ionan, Deputy Minister at the Ministry of Digital Transformation of Ukraine, joining us from Kyiv. Valeriya oversees Ukraine’s national digital literacy policy, the development and growth of SMEs, entrepreneurship, regional digital transformation, as well as Euro-integration and international relations. Next we have Dr. Pramod Varma, joining us virtually from the US. Thank you, Pramod, for waking up so early in the morning for us. Presently he is serving as the CTO at EkStep Foundation and co-chair at the Center for Digital Public Infrastructure. His extensive experience as the former chief architect of India’s Aadhaar program and his work with India Stack layers like eSign and DigiLocker makes him a prominent player in India’s digital infrastructure. Next up, in this room we are joined by Mark Irura from Kenya. Mark is currently a technical advisor for the Fair Forward Artificial Intelligence for All project at GIZ.
He possesses valuable expertise in data and digital system management. His previous background includes his role as a consultant at Open Institute and project manager at Development Gateway. He has extensive experience in implementing various digital initiatives in Kenya. And finally we have the dynamic Adriana Groh. She is the co-founder of the Sovereign Tech Fund in Germany and former director of impactful tech projects such as the Prototype Fund at the Open Knowledge Foundation, and she has been a prominent figure in advancing digital sovereignty, participation, as well as open digital infrastructures. A round of applause for our panel members. Thank you. But before we dive into our discussion, I would like to make a special mention here. As said earlier, the session is organized by the German Ministry of Digital and Transport, and we are joined by Ms. Irina Soeffky. She is Director for National, European and International Digital Policy, and I would request her to kindly give her opening remarks. Thank you.

Irina Soeffky:
Thank you very much, and welcome everybody. It’s wonderful that you’re all here, and it’s a great pleasure for me to engage today in this discussion on digital public infrastructure. A very timely topic, obviously, and I must say that in particular the Indian G20 presidency did a great job in bringing this topic to the center of the stage, and in the process we learned a lot about what India has achieved in the field already, which is quite impressive. I assume we will hear a bit more about that also today, but obviously there are also other countries that have already done impressive projects in the field. So it’s great that we engage in further discussion today and talk a bit about lessons learned, and obviously Germany is also doing its share in the field. We don’t usually call it digital public infrastructure internally; we rather talk about digital public services, but in substance we’re probably doing about the same thing. So we certainly do have an eID, which is supposed to get smart, so onto your smartphone, this year, and we are moving it in the direction of an EU ID, which should then be usable within the entire European Union. So these are pretty important projects, and for good reason. They are central in our national digital strategy, because we believe that implementing these projects is particularly important to enable digitization across different fields and branches.
So another example of our national work, and this is actually something that my ministry is doing, would be building an ecosystem of mobility data, which we use to make public data available but also to fuse it with data that is provided by private sector players. Bringing the two together, we hope, will have an impact on making new business models and new options possible. But these are just examples of what we do at home. Maybe the even more spectacular thing is what we do together with partners internationally, and that is not us but colleagues from the Federal Ministry of Economic Cooperation and Development and, of course, GIZ. That is the GovStack initiative, which is a pretty impressive project that we also talked about quite a lot during the Indian G20 presidency. It’s all about open, interoperable elements that are reusable, offering them to countries to use to build their public infrastructure. As I said, with a focus on interoperability and openness for usability, which is very, very important. So this is maybe something that we can bring to the discussion and talk a bit about lessons learned there. But now I’m really excited to hear what others do, and I know that there are very impressive examples that we will hear about. So looking forward to the discussion and the debate, and very glad to be here. Thank you.

Aishwarya Salvi:
Thank you so much, and we’ve seen Germany has always supported inclusive and interoperable digital services, and the work that GovStack has done has also helped other countries introduce and implement these digital services. So thank you so much. We now jump into the discussion. We have two rounds, and each round will be followed by a Q&A. We have reserved three minutes for each speaker to respond to these questions in each round. I just want to reiterate that we will be strict with the time in order to allow everyone, including the audience, to participate in this discussion properly. So in the first round we will look at the existing DPI ecosystem. We all know that the creation of DPI in several countries is a result of cross-sectoral partnerships, with governments laying down the digital guardrails, the private sector providing the technical services, and civil society, academia and citizens providing feedback on these services to make them more user-centric, and each actor in this ecosystem has its own needs and interests. For instance, IT companies need a return on investment to be incentivized to participate in the ecosystem. We have data-driven models that drive innovation, but they also raise privacy-related risks and could lead to exclusion of marginalized communities. So given this context, my question to all speakers is: what role does each actor play in your country’s DPI ecosystem, and how does the government strike a balance between the differing needs and interests of these stakeholders? I would first invite Dr. Pramod Varma, who has worked extensively in India, to respond to this question now. Thank you.

Pramod Varma:
Good afternoon. I hope you can hear me. Yes, we can hear you. Yeah, thank you. Thank you, Aishwarya, for setting that up. Thank you, Minister. There are two parts to your question, and let me clarify a little bit of the difference in how India is looking at these ecosystems. There are supply-side ecosystems. Supply-side means building DPI. To build DPI, who is supporting you? Is the private sector supporting you? Is civil society engaging with you? I think that’s what you alluded to, Aishwarya, when you mentioned the two ecosystems are joining in. But there is also another side. The supply side has been done in most of the e-governance projects in the last two decades or maybe even more. You know, especially, many of the countries use the private sector to supplement the capacity of the government and get it done. So IT services and other private sector services participating in the supply side, that is, towards the build of the DPI, is very common. And India is no different, frankly, in that. Use of civil society or citizen engagement, also as a supply-side tool, to improve the privacy elements, inclusion elements, the desirability of that particular project, is very, very key. And that is also essential in building up anything that is infrastructure in nature. So DPIs, by definition, are not full solutions; they are just infrastructure in nature. So that’s key. But there’s a significant difference in India’s approach to DPI. It’s also the demand-side usage. That’s very different. That is where you put out something like GPS as a building block, as a digital public infrastructure, and private sector innovation is innovating market solutions. These are not IT services companies helping you build it. It is the use of DPI where the ecosystem is very key, not the building of the DPI.
Use of DPI, we believe, India believes, once the DPI is architected well, interoperable, minimal (I would actually put minimalism as one of the most important principles as well), like GPS, think very minimal. All it does is very, very little. But combining and using these DPIs, the market, civil society, like NGOs, and even government can build a layered set of solutions, like the way solutions are built on the internet, that actually reach the large population. And this is key for India, because India’s diversity and scale are enormous: 1.4 billion people, 22 official languages, but hundreds of languages. It’s like a continent by itself. So creating one solution for India, or one solution for Africa, for example, which is a continent with a lot of people, different cultures, different societies, different contexts: one solution is not what we are after. One infrastructure is what we are after, with many, many solutions on top of the infrastructure. So think of the internet: many solutions on the internet. Think of GPS: many solutions combining GPS. So think of infrastructure as a means to create minimal, interoperable building blocks that are then opened up for the demand-side ecosystem, which is the market ecosystem, the civil society ecosystem, and government (even government can innovate), an innovation ecosystem that creates very contextual solutions for those people. And in this, the market is very key, because the market creates sustainability and very, very agile innovation, unlike government trying to do everything. So the market is key for us, and UPI, Unified Payment Interface, is a classic example, where multiple unicorns and multiple large companies, including Google Pay, play out, but with interoperability, as we have mentioned: no monopolization or colonization of that sort, right? The infrastructure is open and interoperable. But NGOs are also key for us, because India’s diversity necessitates a long tail.
That last long-tail solutioning is very hard: solutioning for the small section. For example, serving a very small vulnerable section in a tribal sector is very hard, because the cost of developing solutions is very high. So DPIs also bring down the cost of solutioning, which is what happened with digital ID, digital payments, and digital paperless interactions. It dramatically reduces the cost of solutioning, for NGOs as well. So we believe the demand-side ecosystem is more important for the DPI, Aishwarya, than the supply-side ecosystem, because that creates sustainability of solutioning and diversity of solutioning. Thank you.

Aishwarya Salvi:
Thank you so much, Pramod. I think in India, it’s unique to see how the community got involved in this ecosystem, how the uptake was higher because everyone, right down to the remotest villages, was able to get a phone to access these services on just one device. So I think that’s unique to India. Moving next into the room, I would request Mark to give his response. Thank you.

Mark Irura:
To add on to what’s been shared already: the supply and the demand side were mentioned. On the supply side, we have actors such as funders, actors such as government, sometimes even the private sector, and civil society. They’re trying to create something, to build something. And the way it’s been portrayed as well is that we have people who sit in the middle there, almost on the demand side, because they are waiting on this package. They are waiting on a payment system to leverage it to deliver a service. And then we have users. Users do not care about digital government. They know government. So they want a service. So if I could speak to Kenya previously: before we moved on to one e-government platform, it was management information systems being run across various departments. So if you wanted to register a business, you would go to one office, fill in a form, wait a couple of days, and go back. So when they digitized, or automated, you still had to go to that office, and then you were sent to another office, even though they had a system. But now everything has been centralized a little bit. But we are still finding a challenge, because there is no community in between the demand and the supply able to innovate around packages of reusable, interoperable components. Because of that, there is no longer-term view about these digital services. What do they look like? If, today, for some reason, the payment platform for government goes down, what’s the impact on the economy? If, today, for example, there’s an outage for 30 minutes and businesses cannot be registered? And so I think, to add on to what’s already been shared, is the thinking that we have system integrators who sit in between. System integrators are startups, as has been mentioned, or even tech companies, who are able to latch on to what is already existing and build upon it.
But there are some things that government or even funders cannot hand over. So let me give an example of a responsibility government cannot delegate: the regulation aspect, or the vision, or the foresight. What does this platform look like 10 years from now? Because whatever we are building now will be a legacy system in two or three years. So what will that look like? Who will continue to maintain it and sustain it? Who will pay for it? So if we have a community approach, then we don’t just think of open source in terms of ‘free as in beer’. It’s open source in terms of how this developer community will not resent putting their intellect and their energy into building these services, and building them over the long term.

Aishwarya Salvi:
Thank you. Thank you so much, Mark. Yes, I think the concept of system integrators is very important because, as you mentioned, the government needs to look at governance and the regulations that they need to lay down. But when we talk about operational and management issues, we need these startups and companies to get more involved in the economy and in the ecosystem, to do the daily repair work or the maintenance of these services. So thank you so much, Mark, for your response. We now move to Adriana. I would invite her to share her experience, what’s happening in Germany, how do we balance these needs of stakeholders?

Adriana Groh:
Thank you. So now I’m stretching the definition, and thereby the topic we’re talking about, a little bit, with the work that we’re doing with the Sovereign Tech Fund in Germany. And we’re not limited to software that is developed in Germany. So maybe a few words about this, so you understand how I’m stretching the definition of DPI that we’re using right now. The Sovereign Tech Fund supports open digital base technologies, that’s what we call them, so as not to use ‘digital infrastructure’ again, because otherwise the term just gets really bloated. And what I mean by that, to put it simply, is software that developers use to develop software. And, not speaking for this room maybe, but most people don’t think about that, although it’s very necessary. This software is very critical and very vulnerable, and if it breaks, it scales massively through everything that we’re using every day. But it’s invisible to many people who are not software developers, because you’re just using the interface, but there’s a lot behind that. We’ve seen with Heartbleed way back, but also with Log4j, how it impacts basically everyone when it breaks. And the Sovereign Tech Fund’s mission now is, well, we probably won’t be able to prevent that forever, but to work on it and increase the awareness a bit for this layer of the software stack. So what I mean then, by stretching or complementing the DPI approaches we already heard, is, well, basically saying: look a bit deeper. Because everything we build relies on software that is running in the background, software that communities and software developers, also in companies and businesses, need. I have some numbers, they’re all very, very terrible, but I’m just going to give maybe one: 64% of the 133 most widely used software components that everyone relies on are in a very critical shape and only maintained by a handful of people. It can be two, it can be three. They are doing this, and most people don’t notice.
It’s a community of very intrinsically motivated people; some of them work for companies, but most of them do this critical work in their free time. So what we need to do is develop a more holistic approach when we talk about DPI, in the sense that we need to secure the foundations, innovate, and maintain. It needs to be the whole life cycle. I think people in this room know about this, but because this work is also rather thankless, it’s a little bit like the road you take every day to work. You don’t think about that road until it’s blocked or broken, and there’s long maintenance work, and then you’re really annoyed. But if it’s just working, it’s just there. That’s the same for this layer, the focus of the Sovereign Tech Fund, so if that’s not working, all the great missions we just heard about are also not working. So that is my short pitch. I’m really looking forward to opening up the room for the discussion, because it’s a particular topic: the production logic, we also heard about this, is different in this field. So I mentioned the intrinsic motivation of many people. It’s also a very old legacy, so to speak, and our whole very successful global digital economy relies on this software running. The whole world relies on software, on open source software actually, running in the background, being maintained, being available. It is one of the key reasons why we’re innovative, why we have competition, why we have startups and SMEs. So it’s a really important topic for civil society, for governments, and for companies worldwide, and if we manage to have this holistic approach, I think that’s going to really get us far, and also secure us, everyone, in a position to act in the future. Because if the roots are not well maintained, then the growth will not be long-lasting.

Aishwarya Salvi:
Thank you. Thank you. I think you’re absolutely right. We need to stretch the definition of DPI, because when we talk about infrastructure, the mentality is: is it hardware, just hardware? But it’s not. It’s also software, and as you rightly mentioned, the entire economy relies on this software. So, yes, we should include software as well in the definition when we talk about DPI. We now move online, and I would request Valeria to kindly respond to this question. Yes, good afternoon, dear colleagues.

Valeriya Ionan:
So I would like to echo, in some ways, the previous speakers. Well, we believe in the golden triangle of relations: government, private sector, and civil society. But we think it’s not about building the ecosystem. It’s about, first of all, creating conditions that enable all participants to work efficiently. So instead of discussing the ecosystem stakeholders, which are, to my mind, more or less similar in all countries, I would like to concentrate on several concrete examples that we have in Ukraine. So just for context, in Ukraine we have our state super app, DIA, which is used by 19.5 million users, with digital documents, digital services, and digital signature. And even before the full-scale Russian invasion of Ukraine, Ukrainians had already been able to pay fines and taxes through DIA, or to use DIA for digital documents. But DIA is not only about digital documents and online government services. We are also digitizing the workflow of both the public and private sectors. We use such features as document sharing, validation, and DIA signature to speed up document flow and customer service, and to replace paperwork with digital and intuitive services that reduce costs and save time. So, let’s say, an organization can receive electronic copies of DIA users’ digital documents using a sharing scenario. Through validation, companies can check a digital document’s validity in just two clicks, for example, in stores, post offices, or governmental institutions. So just as an example, the financial sector in Ukraine is one of the industries that most actively uses DIA services. 59 banks have already integrated sharing and DIA signature into their processes. This allows them to conduct quick customer identification and verification, open a bank account without visiting a physical branch, verify a customer when working with payment terminals, etc.
So one of the most popular banks in Ukraine, Monobank, registers customers using a sharing scenario, and the record registration time for this is 99 seconds. Also, one of the banks had around 80,000 bank accounts opened per day, basically because DIA signature made it possible to open bank accounts online. Another great example is our project DIA Education, which is a national edutainment platform for raising digital literacy, and the majority of its content is created together with the private sector or civil society. And that’s really great, because it helps us to, I would say, enable our citizens with new knowledge and expertise, which is really needed on the market. Another great example is our partnership with the private sector. So when the full-scale Russian invasion started, we were able to quickly create an app, called Air Alert (or Air Alarm), that sends alerts about missile attacks. And we have basically a lot of other examples where government is communicating and working really efficiently with private sector companies, with startups, with civil society. We have this fast track of communication, and we think that’s exactly the way modern governments should operate. They should work more like IT companies. They should be more agile and flexible. So what we are doing here in Ukraine: with solutions such as DIA, we are basically changing the way government communicates with citizens. With that, DIA has really become a lovemark for Ukrainian citizens.

Aishwarya Salvi:
Thank you. Thank you, Valeria. We now open the floor for questions. We will take one or two questions if anyone in the room has any questions for the panelists. Otherwise, we can move them to the end of this discussion. All right. So, moving to the next round, thank you, everyone. In the next round we basically look at the role international cooperation plays in this ecosystem. Considering the diverse approaches that have been undertaken to implement DPI, it is still an evolving concept, and there is still so much we don’t know and so much we need to do. We also see that a lot of countries are struggling to implement DPI because of limited technical capabilities and financial resources. So my question to all speakers is: what lessons did you learn in implementing DPI in your country, and how can international cooperation be leveraged to build interoperable, inclusive DPI that empowers people? I would first request Mark to respond to this.

Mark Irura:
Thank you. Thanks for the question. So I’ll begin by agreeing with what has been said by Valeria about governance. That is very important. So in Kenya, we have this platform, eCitizen, that has been in use. But its limits are now beginning to be tested. And part of it is because there are governance issues that cannot be solved by technology, and there are technology issues that cannot necessarily be solved by governance. And I think on the governance side, there is a need to look at it a little bit more critically. In what ways? So for example, how do we have a very robust infrastructure to deliver these services? And how do we begin to look at a community around it? So that’s one. As funders, and those who are in the room, we have to think about this with a longer-term view than we do right now. Because a lot of times there is pressure to show results. Sometimes we don’t want to accept we are failing. But it is important to think about it with a little bit of a longer-term view. Because if you’re talking about governance, and governance of data, what does it mean now? Yes, it’s very convenient that at the proverbial click of a button, I can log in and do something in five minutes. But data has been shared across multiple agencies without me being aware. What does that mean? And what does that mean when I’m aggrieved, when I want to complain? So we have to think about that, and that takes time. Then, number two, we also have to think about the technology and the economics of the technology. Because there is something for government when they say: we want to lower the total cost of ownership of this technology; we don’t want to pay recurrent licenses because we cannot afford them, and that is valid. So do they have the skills to procure digital public goods? That’s another consideration, just building that capability. And that takes time. So as funders, are we looking at that?
Do we understand it even as we speak to it? And then, lastly, I will just mention that we have to create funding instruments that look at it this way, and maybe collaborate with others, so that we look at it as laying the foundation of a house. When you are laying the foundation of a house, and I am speaking about Kenya now, you sink a lot of money into the ground, and you do not see results. But once the foundation is done, you can make a lot of progress with the construction, and people will come around and see something. But for a long time, you are just under the ground, putting in money, and people cannot see what you are trying to do. But if you have a longer-term view, then you have a robust infrastructure that may continue to be relevant even when some of the technologies become obsolete in seven to eight years, because those tools are still in use.

Aishwarya Salvi:
Thank you, Mark. I think we all agree that there is a need for long-term planning, because these technologies are fast-changing, and as governments, the private sector, and even communities, we need to keep pace with these dynamic technologies. Thank you, Mark. I would now request Valeria to respond to the question. How do you think international cooperation can be leveraged to build interoperable DPI? Yes, thank you for the question.

Valeriya Ionan:
So first of all, it’s important to say that all governments are facing the same challenges, especially when it comes to digital transformation. So it’s, again, digital services, digital literacy, interoperability, cybersecurity, now AI, and many, many others. And there is absolutely no need to waste a lot of time finding the solution to some problems if the solution already exists and operates efficiently. And it’s not just about concrete technical products or technical solutions. For example, in Ukraine we are learning a lot from Estonia. Estonia has been our mentor in digital transformation. We are using a lot of Estonian GovTech products, including X-Road for interoperability, which in Ukraine is called Trembita. But now, with our DIA and the DIA ecosystem, we also have a lot of achievements, and we are ready to share our experience with the world. So what I’m trying to say is that the world should be more aligned when it comes to questions of digital transformation and should understand all the existing solutions, in order not to waste time but to optimize it, and to use those solutions. And also, it’s not just about products, not just about solutions. It’s also about experience in some soft questions. For example, in Ukraine in 2020, we created a new position in the Ukrainian government, which is called CDTO, Chief Digital Transformation Officer. These people operate at the level of deputy ministers or deputy governors. And today we have CDTOs in every ministry, in every governmental agency, and in regional councils. So that basically gives us the possibility to move fast with all digital reforms in different spheres and at different levels. And we know that in other governments, in other countries, the organizational structure is, I would say, slightly different.
But what we see is that especially this organizational structure was one of our success cases, which helped us make a huge leap in digital transformation in just three years. And if we had a good platform for communication between government, civil society, and the private sector, where we could share not just products, which obviously could be open sourced or not, but also this kind of experience, I think that is something important, at least, to elaborate on. Also, there are no great, I would say, academic or non-academic programs that really prepare people to become Chief Digital Transformation Officers for their governments. There are some high-level strategy or leadership courses, but when it comes to finding concrete solutions, I think new world-class education for digital leaders from all over the world, which would give not just knowledge and expertise but also the possibility of regular, even unofficial, networking, would improve a lot and open a lot of new possibilities. So of course, we can speak in this panel and on this question a lot about interoperability and lots of different technical solutions. But I believe this information is more or less available on the web. And that’s why I think my main message here is that we have to focus more on communication, on networking, on finding more points for cooperation between our countries and different institutions. Thank you.

Aishwarya Salvi:
Thank you, Valeria. I think all governments need to make drastic institutional changes and create positions like the Chief Digital Transformation Officers that Valeria mentioned, because these people can reach out to the citizens, build capacities, and ensure that each citizen uses these digital services. I would next ask Adriana to respond. Thank you.

Adriana Groh:
Thanks. Yeah, I agree also with what has been said. I think it’s important to stress that we need to be in a position where we can do the sharing and learning together. We don’t need to reinvent the wheel. Sometimes it’s good to have similar tools running at the same time, test which one works better, and then plug and play a little bit. So it’s necessary, I think, to stress that, yeah, public money, public code has been heard before, I guess. But you need to be able to share, adapt and change software that is around if you really want to push this learning and sharing approach to the maximum. Coming back to the focus of the Sovereign Tech Fund, those digital base technologies, it’s a bit different, because maybe you do want redundancy. You want to have two or three things that do the same job running at the same time. Because, coming back to the road analogy, if you only have that one road, and it is blocked one day, what are you going to do? So it’s maybe less about finding that one solution, sharing it, and adapting it to your specific needs. It’s maybe about deliberately seeing where we need redundancy and how to maintain it. And coming back to international cooperation, this is a global digital common. There are no geographic boundaries around the parts of the software that we’re talking about. They’re used in all different kinds of contexts everywhere. So it is particularly important to be well-coordinated here. Because what could happen, with all the good intentions that we have, is ripping apart this ecosystem of the very foundations that we all rely on, because you’re not coordinated, you’re pushing and pulling in different directions. You can also not fix it by just throwing money at it. There needs to be a strategy. There needs to be a community that advises you. There needs to be engagement from the private and the public sector.
So if we’re not doing it together, it’s not going to work. So it’s not a nice-to-have, it’s a real must-have to be well-coordinated and understand that for this digital public common, we need to fix also the tragedy of the commons. Because I think right now what we have is everyone relies on it, but nobody feels responsible for it. So this is, I think, our exercise for everyone to analyze this, to then come up with solutions, and then sit down together and really implement that. And not do it in our little boxes, but really from the very, like, day one, do it together. Thanks.

Pramod Varma:
I would now request Pramod Varma to respond to the question. Yeah, I think many of these best practices and learnings have actually been shared quite widely, and they are available in various papers, you know, writings and talks. But I’ll give you maybe three different parts of at least the learning we went through. One, when you build DPIs, at least for us, we were looking at just one component of that which allows a lot more innovation to happen. That’s why I keep using the GPS analogy. It is not about thinking through the whole solutions. We are letting the market and society and other parts of the government and so on put together solutions later. So for others to build solutions, what do we need to build? That was the real question we were asking. Hence, minimalism was a big principle that we kept playing out. Look at our identity project, our payment project, or credential sharing, and, like in Ukraine, the paperless ideas, the paperless workflows. We wouldn’t build the workflows, but we built the credentialing infrastructure that allows many, many workflows to be built. So minimalism was very, very essential. Interoperability, of course. Decentralization. India is very diverse, and we have the federal hierarchy between the center and the states and quite a lot of autonomy spread across many different parts of the system. So centralizing as an architecture or design never works out. It never gets implemented well. It’s also good for privacy. So decentralization as a design principle is very key. And of course, thinking through privacy and cybersecurity for any digitization is very key. These are sort of the technology principles, but on governance, I think very good comments were made in the panel. Policy interventions are necessary. Creating participatory governance, especially if this highway or this road is used by many, many people. How does the governance of the road itself work? Because many people are going to depend on it.
Marketplace and others are going to depend on it. So participatory governance, accountability, dispute and grievance resolution. My colleague talked about that. Its importance lies in the fact that something will always go wrong, and if things go wrong, the process of addressing that wrong is very, very key. And most importantly, resilience. I think the redundancy topic was key. Resilience in India is not one payment system. We have three or four payment systems that seem to do somewhat similar things, but this is actually a good thing, because making the entire system depend on one digital building block is risky: if it gets attacked or goes down for some reason, the entire system can come down. So resilience and redundancy are very key. But one more set of learnings we had, non-technical, non-governance learnings, is regulatory, political and societal buy-in. Many of these DPIs, at least for India, touch a billion people, and that requires significant buy-in from society, political leadership and regulatory leadership, and most importantly, market incentive alignment. Why should the market use the DPI? Can they create a closed loop or a walled garden, a monopoly? They would all want to create those private solutions that lock in the users and lock in the country, but what is their incentive in playing interoperability? So I think there’s a lot more discussion that needs to be done to get that buy-in; especially if you’re implementing at scale, whole-country scale, it’s very key to get the alignment. And on global coordination, I mean, this is a no-brainer, frankly, because as somebody mentioned, there’s no border to people’s aspirations. People’s aspirations are not limited to geographical borders. People want to go across countries. They want to pursue economic opportunities, education opportunities, healthcare needs. People travel and go across.
So discussing interoperability and portability of my data and credentials, so that as a citizen I can continue to use my data instead of depending on large systems to coordinate, is very key. And we saw that with the vaccine certificate during COVID times. It was essential that we allow people to move around with the vaccine certificate. So interoperability, sharing of learning, and I’ll also add sharing of assets. I think most of the panel said our assets are now available as DPGs, digital public goods, open source goods. I think sharing the assets of what we are building with others also helps accelerate this journey.

Aishwarya Salvi:
Thank you. Thank you, Pramod. So do we have any questions from the chat? Also looking into the room, of course, if there are questions in the room, happy to take them as well. We have one microphone in the middle. Please line up. But thank you for lining up already. As Mr. Abdiaziz Ahmed posed a question quite a while ago in the chat, I would read this out first. It goes towards Africa, and so I look to Mark; he might have a response on this. The question goes: how can we increase citizens’ trust in their governments, especially when it comes to digital IDs in Africa? Shall I go for it? Okay.

Mark Irura:
So there are probably four things. Part of it has been mentioned by the last speaker. So we have people, we have processes, we have the product that you want to sell, and we have politics. And these are things that are taught when you’re developing IT solutions, in terms of maturity models and managing change. So I think one of the solutions is, we of course talk about the citizen at the center, or human-centered design. It might be challenging or difficult to do it with very many people. But I believe one of the things that can drive or stop these court cases, you know, where you implement a system and people go to court to stop it, is anchoring in the processes questions like: how do we treat my individual rights as data rights? The citizen is often left out; public participation becomes an academic exercise. So I think having the regulations is good, because it helps citizens to push back and use the instruments of the law. And that’s a good thing. And I think then we are kind of testing the laws and seeing how best to involve citizens in defining or designing these solutions. And I think that might be part of the issue right now with, you know, the low level of trust. If I sign up today, how does it translate to a public service being delivered to me? That connection between data and water or electricity is not direct, but it’s also because there’s a trust deficit in the politics of how everything is done. Thank you so much, Mark. I will not do what most moderators do and repeat the gist of what has been said, because we were warned we only have a few minutes left and there are still many questions in the chat and in the room. And we take the first question from the floor.

Aishwarya Salvi:
Please, Leah. Yes, thank you so much.

Audience:
My name is Leah Gimpel. I’m from the Digital Public Goods Alliance. And I really love that we speak so much about sharing technologies and open source here in the session. However, we work a lot with countries, right? And I think we discussed here already that it’s not only about technology being available. It’s also about governance and, in general, an approach; DPI is an approach much more than a technology. And what we hear a lot from countries is that they’re afraid of making the same mistakes again that they made in the past. What I mean by that is that what we find is that there are coordination problems within governments, there are turf wars, and people are not working together on the same thing, right? And in that sense, I’m wondering how we can really instill this DPI mindset in people. So, I’m very much with Pramod, who is saying it’s about minimalism, it’s about starting with a use case and building it in a way that others can plug into it. But how do we instill this DPI mindset in people, apart from champions such as Amado here in the room, who is a champion in his country, building an export implementation? We need more of those people, right? So, how do we get the message across to the policymakers and leadership? Thank you. Thank you, Lea. I’m looking to the panel. So, I’m going to take the question up. Yeah, please go ahead. Do you want me to quickly answer?

Aishwarya Salvi:
Yeah.

Pramod Varma:
So, I’ll give some perspective on at least a few things that we are trying. One, I think the minister mentioned the G20 coordination and discussion. So, many of the countries came together on a shared definition and understanding of what DPI is. It’s just a vocabulary, as everybody has been doing this, but a common vocabulary was created and a common set of principles was laid out. And they said this is important as the digital economy gets developed in many, many countries in the next 10, 15 years, and so on. So, how do we help every country create their own digital rails that allow their own digital economy to be pushed forward? And we talked about GALSTACK and many other efforts that are going on. So, from the people perspective, I think the journey has begun, and many of the discussions are happening. But one thing, in addition, we are also doing is that there are DPI funds now. DPGA, of course, continues to support the sharing of assets via the DPG ecosystem. And we also just started, I’m a co-chair at the Center for DPI. We just started a Center for DPI as a pro bono effort to create DPI fellows and DPI residents around the world. So, we are creating proper training and certification, both certification for policymakers and certification for actual implementers. These are sort of bootcamp-ish things that you go through to actually build. So, we are working with 21 countries, at least for now, to create their own DPI residents in their own country, because contexts are very different, and everybody needs to think through their own context in their own country. So, there are some efforts that are going on, but I think it needs to be accelerated. So, maybe more panels like this, more events like this, more training and support systems like this can actually be useful to bring it together,

Mark Irura:
bring a common understanding together. Thank you. Thank you so much, Pramod. I think Mark, you also wanted to react to the question, right? Yeah, very quickly. So, the issue of total cost of ownership during procurement: in one of my previous roles, I was once in Malawi, and a system had been put in place that was able to transmit some results to patients. So, when it was time to hand over to government, they were like: do we put money into infrastructure like hospital beds, or do we pay for these SMSes? So, from the start, there was a lack of understanding of procurement and what it implies to put this tool in place, because how do you go back to the taxpayers and say we bought SMSes? So, I think it is important to consider what it means. Do we license that database at the middleware or at the application level? And what does it mean over the long term? So, I’ll just add that to the response that was given: for government to try things out without burning their fingers, being locked in, and having to go back to parliament and say we bought a license and it costs X amount of dollars and, you know, there’s a problem with that.

Moderator:
Thank you so much, Mark. We have another question on the floor.

Audience:
Thank you so much. My name is Ramanjit Singh Cheema. I’m Senior International Counsel and Asia Pacific Policy Director with Access Now, the international digital rights organization. And it’s a two-part question. The first question and comment is to Pramod online, and the second part is to the panel. The first one is to Pramod: there’s been a lot of discussion around digitization and learning lessons from the past. So, I’m just curious: wouldn’t it also be useful for the global community engaged in this conversation to learn from the Indian experience and mistakes? For example, having a digital identity project rolled out that didn’t have a legal framework, that did not account for data protection rights, that in fact disputed that privacy was a fundamental right. But also, most importantly, the very design concept, and I know you in particular, Pramod, have extensive experience around the design of the system, namely a centralized, cloud-stored biometric database. Would that be something that’s good for other countries to adopt and learn from? And the second part of the question, to the panel: given that infrastructure has rights and governance concerns, what steps has the DPGA, or this community around digital public goods, taken to mitigate harms around digital identity, misuse and exclusion? Specifically, have there been consultations with human rights groups or others around lessons from digital identity experiences in India, in Kenya, in the Philippines and elsewhere? And how can human rights groups be baked into this consultative process?

Moderator:
Thank you. Thank you so much. I first look over to Pramod. Do you want to react? Yeah. So, yeah, I think much of these learnings have been actually again documented.

Pramod Varma:
We don’t have to; especially with the identity story, it’s actually 12, 13 years now into the system. It was done with full executive support, parliamentary approval, budgets, all the regulations. I agree, the law could have been done earlier. So, countries starting today should definitely look at full legal support, especially for identity. Identity is a sensitive topic today, unlike things like payments or anything else, where the law can come later. But every country has their own journey, and those journeys are in the context of that country. I think we did have existing laws that supported it, and subsequently, you know, laid down the special-purpose law for the identity itself. The cloud part is actually wrong. You are wrong about the cloud part, because any unique identity attestation necessitates establishing uniqueness, establishing that you are you, and that is why most national ID projects, even in Germany or anywhere else, have an identity database or a social security database or something else. Now, maybe in the future there might be technologies, which I’m not aware of at this time, that can do a uniqueness attestation without storing the previous data. So, there is some storage of data, but it has to be minimal, and it has to be secure. The identity system has never been breached at the central level, never been breached so far. There were obviously, on the edges, incorrect usages and data leaks that have happened, unfortunately, but it is not the central storage that worries me. It’s the governance around it, the security around it. The purpose necessitates storing the data it needs to store, and it needs to store a minimum set of data. So, I think that fundamentally is not an architecture issue at all.

Aishwarya Salvi:
It’s not a design issue.

Pramod Varma:
It’s how every identity system would play out. Of course, the questions can be: how is it protected? How is it used, and how do we make sure it’s not misused, and so on? These are important questions, and much of those learnings have been documented. So, I think countries will have their own context. Whether you implement in 2010, 2015, 2023 or 2030, new technologies can be leveraged to relook at some of these design constructs.

Aishwarya Salvi:
Thank you. Thank you, Pramod.

Moderator:
I’ve already seen the sign that our time is up, unfortunately. There was a second part to the question, which was about which measures are being taken that digital public goods are actually secure and protect the data of citizens. Is there a very brief reaction from the panel on this question? Also looking to Valeria and Pramod. No? Mark, please.

Mark Irura:
So, there are preventative and there are curative measures. Of course, we run to the law when it’s curative, but on the preventative side, there is work being done by, say, the Digital Public Goods Alliance, which is coming up with good practices that we can adopt, and we take them as principles. The reason we do that is that you are preempting an issue by following this set of practices. So, I’d offer that as a response, but I think Pramod really did talk about it. I’m just going to say one sentence. I think it’s going to make everyone safer in general if we understand that we need to support an open, available and secure ecosystem of digital infrastructure components, because that’s where a lot of security issues also arise. So, we should understand it as the public’s job

Moderator:
to invest with public money in that area as well. Thanks. Thank you for that statement. I know we are over time. I would still take one very short, very, very last question from the floor.

Audience:
Okay, hi. I was going to frame this as a question, but I think I’ll leave it as a comment, and the panel can choose to react or not. I was wondering if you’d think about taking a bottom-up approach to the redressal mechanisms for a lot of DPIs, right? We have seen in India that the failure rate of the identity system can be really high, and that affects the public welfare delivery system to a large extent. So, maybe think about providing the user with a choice: if the digital aspect of the verification mechanism does not work, they can go to a person who can help them out, because we’re also dealing with a low level of digital literacy in a lot of countries. So, thinking about those redressal mechanisms in particular, and also giving the user a certain level of choice in how they deal with these systems’ failures, would be interesting. And I wanted to know what DPIs in your respective areas are doing around this, but I guess you can choose to react. Thank you. Thank you so much. Any spontaneous, very short reaction? Yes, Valeria, please.

Valeriya Ionan:
Yes, thank you for the question. So, obviously, when it comes to digital transformation, it’s important to see everything as one system. In Ukraine, for example, we have projects which are working simultaneously. For example, we have a national program on the development of digital literacy. So, everyone who would like to increase their digital literacy can do it either online or offline, in special digital centers where there is a gadget, an internet connection, and a facilitator who can facilitate the first contact between a person and a gadget or the platform. But when it comes, for example, to digital identity and to the DIA app, which is our state’s super app and which has 14 digital documents, I would use this opportunity to remind you that Ukraine is the first country in the world where digital passports are fully equivalent to paper or plastic ones. DIA does not store any personal data. DIA uses a data-in-transit approach: it connects directly to the highly secured state registers and basically just shows this data. So, that’s a really good question, but probably there is no short and easy answer to it. When it comes to digital transformation in a government or a country, to my mind, the most important thing is the vision. In Ukraine, we are building the most convenient digital country in the world. That’s why we have created user-centric and human-centric services and products. And whenever there is a need to create something new, a service, a product, whatever, we ask ourselves whether this really brings us closer to our vision of the most convenient digital country in the world. It’s impossible to build it if you don’t have a basic level of digital literacy in your country. So, it means that you have to take a lot of measures in this regard. It’s impossible to build the most convenient digital country if you do not have digital services which are available for everyone and which are inclusive.
It’s impossible to build a digital country if you do not have a digital economy which is working. It’s impossible if government does not have a specific person who is responsible for digital transformation in their own sphere and at their level, whether national or regional. So, a really great question, but I think it’s a topic for a separate discussion, probably. Thank you.
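As an illustration only of the "data in transit" pattern Valeriya describes (the app shows register data without storing it), the idea can be sketched as a stateless front end that fetches a record from an authoritative register on each request and persists nothing. This is a minimal generic sketch, not DIA’s actual implementation; all names here (`StateRegister`, `TransitApp`, `show_document`) are hypothetical.

```python
# Minimal sketch of a "data in transit" design: personal data lives only in
# the authoritative register; the app layer fetches per request and keeps no
# copy. Hypothetical names throughout -- not the real DIA architecture.

class StateRegister:
    """Stands in for a secured government register (the source of truth)."""
    def __init__(self, records):
        self._records = records  # only the register stores personal data

    def lookup(self, citizen_id):
        return self._records.get(citizen_id)


class TransitApp:
    """Stateless front end: shows register data without storing a copy."""
    def __init__(self, register):
        self._register = register  # holds a connection, never the data

    def show_document(self, citizen_id):
        record = self._register.lookup(citizen_id)  # fetched on demand
        if record is None:
            return "no such record"
        # Render and return; nothing is cached or written to disk.
        return f"{record['name']} - passport {record['passport']}"


register = StateRegister({"u1": {"name": "Ada", "passport": "P123"}})
app = TransitApp(register)
print(app.show_document("u1"))
```

The design choice the sketch makes visible: a breach of the app layer exposes no stored personal data, because the only durable store is the register itself.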

Aishwarya Salvi:
Thank you so much. I will quickly summarize the entire discussion for our audience. So, sorry to keep you waiting. In this discussion, largely, we saw that we need to understand what we mean by DPI. And when we look at this concept, we need a holistic approach that includes not just the hardware but also the software, because again, there are no boundaries. We also need to look at the demand side. We need to see what the community needs and how they can participate in ensuring that these DPIs are built to be safe and user-centric. From the government side, we need to make drastic institutional changes and have digital officers who are responsible for ensuring that citizens use these services and that grievance redressal is in place. So, just a quick summary here, and thank you so much to the audience and our speakers, especially to Pramod, who woke up so early in the morning for us. Thank you so much. And thank you, Reena, for joining us in this discussion.

Adriana Groh
Speech speed: 162 words per minute | Speech length: 1237 words | Speech time: 459 secs

Aishwarya Salvi
Speech speed: 154 words per minute | Speech length: 1864 words | Speech time: 728 secs

Audience
Speech speed: 205 words per minute | Speech length: 823 words | Speech time: 241 secs

Irina Soeffky
Speech speed: 164 words per minute | Speech length: 593 words | Speech time: 218 secs

Mark Irura
Speech speed: 156 words per minute | Speech length: 1956 words | Speech time: 753 secs

Moderator
Speech speed: 167 words per minute | Speech length: 165 words | Speech time: 59 secs

Pramod Varma
Speech speed: 158 words per minute | Speech length: 2517 words | Speech time: 956 secs

Valeriya Ionan
Speech speed: 164 words per minute | Speech length: 1732 words | Speech time: 635 secs

Child participation online: policymaking with children | IGF 2023 Open Forum #86


Full session report

Audience

The articles analysed cover a range of topics related to youth engagement and online safety. One article explores the effectiveness and necessity of age verification systems online. It discusses Marie Stella’s investigation into the opinions of youth and children regarding age verification. Stella found that, while adults haven’t found a perfect solution, the issue still needs attention.

Another article focuses on the opinions of young people regarding age verification. It raises the question of whether awareness-raising education alone is enough to prevent access to harmful online content. This article emphasises the importance of further examination and dialogue within the context of quality education and strong institutions.

The third article highlights the importance of engaging companies in child participation, particularly in areas with restricted democratic participation. It discusses how companies can contribute to the design and decision-making processes that affect children. While Microsoft is suggested as a potential partner, other companies are also encouraged to get involved. This article emphasises the role of SDG 16 (Peace, Justice and Strong Institutions) and SDG 8 (Decent Work and Economic Growth) in promoting child participation.

The fourth article stresses the need to develop convincing strategies to engage companies in child participation. It emphasises the importance of partnership for the goals and industry innovation and infrastructure, as outlined in SDG 17 and SDG 9. Microsoft is suggested as a possible partner, but other companies are also welcomed.

In conclusion, the articles highlight the importance of addressing these issues to ensure the online safety and well-being of young people. They emphasise the need to explore effective age verification systems, consider youth opinions, and promote awareness-raising education. Engaging companies in child participation and developing convincing strategies are also seen as vital. These discussions align with various Sustainable Development Goals, such as SDG 4 (Quality Education), SDG 16 (Peace, Justice and Strong Institutions), SDG 8 (Decent Work and Economic Growth), SDG 17 (Partnership for the Goals), and SDG 9 (Industry, Innovation and Infrastructure).

Courtney Gregoire

The digital environment, although not originally designed for children, has a significant impact on their rights and potential. The policies of technology providers play a crucial role in shaping this impact. It is important to transition from a mode of protection to empowering youth voices. For example, Microsoft has a long-standing commitment to children’s online safety. They recognize the need to understand how children use technology in order to better design it. One way they have addressed this is through their gaming platform, where they introduced ‘Home Sweet Hmm’ to promote online safety.

The argument put forth is that children learn through play, highlighting the role of educational gaming in their development. Microsoft’s ownership of a gaming platform further emphasizes their involvement in promoting learning through play and fostering a safe digital environment for children.

Regarding product development, it is crucial to engage children in the process. Microsoft has convened three councils for digital good, where children aged 13 to 17 have provided valuable feedback on their services and apps. This demonstrates Microsoft’s commitment to involving children and incorporating their perspectives in the development of their products.

The potential of Artificial Intelligence (AI) is also highlighted, particularly its positive impact when used responsibly. The argument suggests that AI has the ability to do good, though responsible use is key to ensuring positive outcomes.

The summary also emphasizes the importance of incorporating children’s online behavior into policy-making. This reflects the need to understand how children ask for help online and to consider their experiences when shaping policies related to child safety and well-being.

Microsoft’s approach to child participation is noted, as they leverage existing organizations to engage children in product safety and design. They have previously convened Councils for Digital Good, collaborating with NGOs and academia to gather information and stimulate conversations on these issues.

Finally, the argument is made that children’s voices should influence both company rules and regulatory/legal rules. Microsoft actively involves their child participants in direct interactions with regulators, demonstrating their belief in the importance of children’s influence at various levels of decision-making.

In conclusion, the expanded summary outlines the significance of the digital environment on children’s rights and potential, the importance of empowering youth voices, the role of play and education in their development, engaging children in product development, responsible use of AI, integrating children’s online behavior into policy-making, and Microsoft’s efforts to involve children in shaping rules and regulations.

Afrooz Kaviani Johnson

The analysis emphasises the importance of actively involving children in decision-making processes and policy development, particularly in the area of online safety. The Convention on the Rights of the Child recognises and upholds children’s right to freely express their views. The speakers in the analysis highlight that by involving children, policymakers can tap into their creativity, skills, and unique understanding, leading to more effective and tailored policies and programs.

It is crucial to consider that children interact with digital technology in ways that differ greatly from adults. Therefore, their perspectives and experiences must be taken into account when formulating policies and programmes related to online safety. Including children’s insights allows policymakers to gain a better understanding of their needs, enabling the creation of more relevant and effective guidelines.

Several supporting facts demonstrate the benefits of involving children in decision-making processes and policy development. The Committee on the Rights of the Child proposes nine basic requirements for effective child participation, including transparency, voluntariness, respect, child-friendly approaches, inclusivity, support through training, safety considerations, and accountability.

Examples from Tunisia and the Philippines illustrate how children’s voices have helped shape national plans and legislation. In Tunisia, children’s voices played a crucial role in formulating the National Plan of Action on Child Online Protection. By consulting with children, policymakers were able to gain valuable insights and develop a plan that truly addressed their concerns and needs. Similarly, in the Philippines, consultations with children informed the development of a new national plan of action on children’s issues and other legislative instruments.

The analysis also highlights that involving children in decision-making requires careful planning, allocation of resources, and adequate training. In the consultations held in the Philippines, young adults from the communities acted as facilitators, ensuring that children felt comfortable and supported. Additionally, programming for parents and caregivers was implemented, and an emergency response plan was in place to safeguard children in case of any disclosures.

To conclude, actively involving children in discussions and decision-making processes is essential for developing effective policies, particularly in areas like online safety. The Convention on the Rights of the Child recognises their right to express their views, and involving them leverages their unique perspectives and understanding. Transparency, respect, inclusivity, and accountability are all key elements for successful child participation. Examples from Tunisia and the Philippines highlight how children’s voices can shape national plans and legislation. However, it is important to note that involving children in decision-making requires careful planning, allocation of resources, and adequate training to ensure meaningful and impactful participation.

Hillary Bakrie

The Protection Through Online Participation (POP) initiative aims to provide a safe online space for children and youth to access protection support. It emphasizes the importance of peer-to-peer support and encourages children-led solutions and initiatives. Hillary Bakrie, a supporter of POP, believes that the internet can be a valuable tool for young people to seek support and highlights the value they place on peer-created solutions. Young people also desire to be included as partners in decision-making processes, particularly regarding online safety and cybersecurity. This inclusive approach ensures that policies and measures are effective and relevant to their needs. To enable effective youth participation, addressing the digital divide and investing in education and skills are essential. Transparency, accessibility, and the recognition of young people’s contributions in policy-making processes are also emphasized. Overall, POP and its supporters advocate for an empowering online environment that values the expertise and experiences of young people.

Moderator

The discussion focused on the importance of child participation in policymaking, particularly in the context of online safety. Participants highlighted the significance of involving children in discussions and considering their rights in the digital environment. It was stressed that children have a unique understanding of their experiences online, and their perspectives should be taken into account when designing policies and interventions.

The Child Online Protection Initiative (COP) and ITU’s role in implementing guidelines for child safety online were mentioned as important efforts in this area. COP aims to facilitate the sharing of challenges and best practices among member states, and the ITU has been co-leading the initiative, providing support to countries in implementing the guidelines. The discussion noted that the involvement of children in policymaking can help ensure that their views and experiences are considered, leading to more effective and relevant policies and programs that address the specific needs of young users.

The role of Microsoft in promoting child online safety was also highlighted. Microsoft has a longstanding commitment to this issue and has developed a suite of products and services that intersect with children’s online lives. The company engages in conversations with young people to understand their needs and enhance the way they interact with technology.

Examples from Tunisia and the Philippines showcased the value of children’s input in shaping national action plans and legislative instruments related to online safety. In Tunisia, consultations with children helped shape the first-ever National Plan of Action on Child Online Protection. In the Philippines, involving children in consultations contributed to the formation of national action plans.

The ITU In-Country National Assessment was proposed as a valuable resource for governments to improve child safety online. By conducting a comprehensive assessment of the existing situation and developing a strategy and action plan based on global best practices, countries can enhance their policies, standards, and mechanisms.

Overall, the discussion highlighted the importance of involving children in policymaking and designing online safety interventions. Children’s participation ensures that their perspectives are taken into account, leading to more effective and relevant policies and programs. The involvement of youth in decision-making processes was also stressed, emphasizing the need for an inclusive approach that reflects the realities and aspirations of young people. The discussion recognised the value of partnerships between stakeholders, such as the ITU, Microsoft, and governments, in promoting child online safety.

Amanda Third

Children’s meaningful participation in the design of services and online safety interventions is considered crucial. The drafting of the UNCRC General Comment 25, which focuses on children’s rights in relation to the digital environment, was informed by consultations with children in 27 countries globally. This approach ensured that the key issues reflected not only the perspectives of adults but also the lived experiences of children themselves.

The International Telecommunication Union has taken steps towards promoting online safety by developing an online safety app, game, and trainings for three different age groups of children. What sets these initiatives apart is the involvement of a children’s advisory group, ensuring that the voices of children contribute to the creation of these tools.

To further support children’s participation, Amanda led the establishment of national child task forces in five countries. These task forces serve as guides for the government’s approach to online safety policy, emphasizing the importance of involving young people in crafting policies that directly affect them.

Youth participation in policy-making is highly valued and encouraged. Amanda suggests that shadowing decision-makers could enhance children’s influence in shaping online safety policies. Additionally, Amanda proposes that organizations’ platforms should actively seek young people’s input in a daily, approachable manner. This ongoing, real-time conversation would allow organizations to better understand children’s needs and preferences.

A notable finding from the consultations conducted in 27 countries is that children expressed their desire for improved online protections and data security. This highlights the importance of addressing these concerns to ensure a safe digital environment for children.

It is worth mentioning that attempting to restrict children’s online activities without considering their input can often lead them to find ways to circumvent such systems. Therefore, involving children in the decision-making process can lead to more effective and sustainable solutions, as children become active participants rather than passive subjects.

In conclusion, the engagement and participation of children in the design of services and online safety interventions are crucial. Through consultations, the UNCRC General Comment 25 incorporates children’s perspectives, ensuring that their unique experiences are reflected. Initiatives such as the online safety app and the establishment of national child task forces further demonstrate the commitment to involving children in shaping online safety policies. Encouraging youth participation and seeking their input in an ongoing manner will create an environment that better meets children’s online safety needs. By addressing their desires for better protection and data security, we can foster a digital environment that is safe and supportive for children.

Boris Radanovic

The analysis highlights the positive and impactful role played by youth in addressing various pressing issues. One notable example is the development of the Bully Blocker app by a group of teenagers, which aims to combat cyberbullying. This app demonstrates how youth-led initiatives can effectively address societal challenges, particularly in the realm of online safety. Another inspiring initiative is the creation of an online fake shop by a Polish high school student, intended to assist domestic abuse victims during the virus lockdown period. These examples exemplify the creative and innovative solutions that young people bring to complex problems.

Furthermore, the analysis emphasizes the importance of involving youth in decision-making processes regarding their own issues. It argues that discussions on how to support children often lack the direct participation of children themselves. However, in order to create valuable actions and solutions, it is essential to include youth input. The presence of youth-led advisory boards is acknowledged, but it is stressed that following through on their advice is crucial to ensure meaningful outcomes.

In terms of online safety, the analysis recommends government representatives apply for the ITU In-Country National Child Safety Assessment. This assessment provides a comprehensive understanding of the existing situation of children’s online safety and aids in drafting national strategies and action plans that incorporate global best practices. It is argued that such assessments can enhance national policies, standards, and mechanisms to protect children in the digital realm. Additionally, the analysis highlights the importance of local adaptations of global strategies, as local cultural, social, and regulatory differences impact the effectiveness of online safety measures.

The analysis also addresses the issue of children encountering adult or abusive content unwillingly on the internet. It argues that children do not want content that is not intended for them in their online spaces, emphasizing the need for adults to implement protections to prevent children from accessing inappropriate material. It acknowledges that the internet and its content were not specifically created for children and therefore, proactive measures are necessary to safeguard their online experiences.

Furthermore, the analysis recognizes that age verification poses a significant challenge in ensuring online child safety. However, it suggests that with children’s input, a solution can be achieved. It is concluded that working collaboratively with children and implementing their perspectives and ideas can lead to more effective and comprehensive measures to protect them online.

Overall, the analysis highlights the important contributions of youth in tackling critical issues, the need to involve them in decision-making processes, the recommendation for government action in enhancing online safety, and the significance of age verification in protecting children online. By considering these insights and recommendations, society can better empower and protect the younger generation in an increasingly digital world.

Session transcript

Moderator:
Can you hear me? Yeah. All right, again, thank you very much for coming. Let me welcome you to the workshop number 86, open forum number 86, Child Online, Child Participation Online Policymaking with Children. So I’m your moderator. My name is Preetam Maloor. I’m the head of the Emerging Technologies Division at the International Telecommunication Union. And the reason why many of you don’t know me, probably, is because I’m substituting for my colleagues, Carla and Fanny, who are the subject matter experts on the topic. So Carla couldn’t come. There was a last-minute cancellation, so I offered to step in. So please indulge me if I don’t use the right terminology or make some mistakes. But to my credit, I’m an expert on this topic because I have my child here, who’s sitting next to me. So since the topic is on involving children in policymaking, if you have any hypothesis you would want to test during this session, here is a subject. No, I can guarantee you he’ll answer it. The quality of the answer is questionable. But anyway. All right. It’ll be great, it’ll be great. So with this, let me just start with a few introductory remarks, which will be very quick, and then we can go to our panel. So some points on the Child Online Protection Initiative. We’ve been working, we as an ITU, have been working on this topic since 2009. 
Initially, the initiative was founded to facilitate the sharing of challenges, best practices among member states, addressing issues of violence against children in the online environment. Of course, we’ve broadened the focus now, actively involving children in discussions, and considering all child’s rights and the digital environment, including the right to participation, education, access to information, and many others. And the activities are now balanced between protection and participation online. So just to give you a background on the Child Online Protection Guidelines, because I know many of you were involved in drafting that. They were initially developed in 2008, revised again, comprehensively rewritten in 2020 by an expert group of more than 30 organizations from the UN, from NGOs, the private sector, academic sector, you know, so it was a truly multi-stakeholder effort. A global program was launched in 2021, you know, set to run till 24, aimed at assisting member states in implementing these guidelines. And it’s been a success story. You know, they are currently being implemented in 15-plus countries across all regions. You know, they include capacity building. They include policy assistance for member states in developing the national strategies, policies, legal and regulatory frameworks. And there’s a lot of activities going on. And among some of the new collaborative activities is the alignment with the new participation-based approach. And ITU has been co-leading the POP initiative, you know, Protection Through Online Participation, with the Office of the Special Representative of the Secretary General on Violence Against Children. And we have a colleague from that office here. A few words on POP, you will hear more from her. It collaborates with over 30 global partners, including the UN, universities, NGOs, youth, private sector companies such as Meta, Disney, Lego, Microsoft, and Roblox. Hillary will tell you more about the effort. 
And in light of, you know, ITU’s efforts, alignment with global trends with regards to the work of expert organizations in this area, discussions around how children can be best involved in matters that are relevant to them, and in particular with regard to child online protection and online safety, are clearly more relevant than ever. And this is the emphasis of this session. So we have one hour, so I really urge you to stick to the three minutes for every intervention that’s allocated to you. So without any delay, let me turn to Afrooz Kaviani Johnson. Afrooz, so I’ll start to, you know, I’ll start the discussion with a fundamental question. Could you share with us why it’s so important to work closely with children on matters that concern them? And more specifically, when we talk about child online safety. Afrooz, over to you.

Afrooz Kaviani Johnson:
Thanks a lot. And a special welcome to your child as well. We’re delighted to have you around the table. Working closely with children on issues that affect them, like online safety, is crucial. And there’s a number of reasons. I’m just going to focus on three main ones. Firstly, it’s a right. The Convention on the Rights of the Child provides that children have the right to freely express their views on all matters and decisions that affect them, and to have those views taken into account. So it’s the right of every child without any exception. And of course, children encompasses like a very broad range of ages. The definition of a child is anyone under the age of 18. So obviously, it’s vital to adjust approaches to suit different ages and capacities. The second main point is that working with children really enhances programs, involving children leverages their creativity, their skills, their unique understanding of their own lives to create and monitor more effective and relevant policies, services and practices. And thirdly, particularly in this space of online safety, it provides real world relevance. So obviously, online safety programs and policies, which are typically designed by adults, will reflect adult concerns and may miss, you know, it will miss, not may, it will miss the nuances and, you know, the things that are important to young users. And children interact with digital technology in ways that are very different to adults. I can say this, you know, in my personal capacity as a parent, but also looking at, you know, the masses of research that UNICEF has undertaken with children around the world, around their online experiences. So consulting children, working with children kind of opens these avenues that we can explore and comprehend actual risks kind of versus perceived threats in the online environment. And it’s not just about listening. 
It’s really about ensuring that our policies, our programs, you know, are rooted in their lived experiences and tailored to their needs. So involving children, it’s not just beneficial, but it’s essential if we want to make our programs and our policies applicable and relevant to them and to achieve the purposes that we want to. Thanks.

Moderator:
Thanks, Afroz. In fact, you know, I’m always pleasantly surprised when I chat with my son about child online safety because they use an iPad at school. And, you know, the way they set their own passwords, the kind of protections they take, you know, the perspectives they have, you know, we usually discount them. And, you know, that’s to our disadvantage. Anyway, thank you very much. So now we have three speakers that I’m going to pose the same question, bringing three different perspectives based on obviously the stakeholder groups they belong to. We have Amanda Third from the Western Sydney University, Boris Radanovich, who’s an online safety expert from SWGFL, and Courtney Gregory, Chief Digital Safety Officer from Microsoft. So let me start with Amanda. But it’s the same question that I’m going to pose to all three of you. So the question of how children can meaningfully contribute to creating solutions to the challenges they face online. And again, the three that we have, academia, civil society and the private sector. Could you provide us with concrete examples of situations where children have played an active role in creating or developing solutions for online issues from your own perspective, own respective fields of work? And how did this outcome of your work differ from work that is solely driven by adults? So let me first ask Amanda.

Amanda Third:
Thanks so much. And I think I would start by saying that thankfully, there are now lots of examples that we can use to illustrate the meaningful participation of children and young people in the design of services and online safety interventions in particular. There has been a recent trend towards meaningful engagement, which many people in this room are a part of. And I guess what I would do is I would just highlight a couple of examples that I’ve had recent involvement with. The first would be to cite the consultations with children in 27 countries globally to basically inform the drafting of UNCRC General Comment 25 on children’s rights in relation to the digital environment, which is a piece of evidence-based guidance for states about how to implement the Convention on the Rights of the Child in relation to the challenges and the opportunities of digital technology. And that process involved working with child-facing organisations in those 27 countries, designing a creative and participatory-based methodology where children attended workshops of five hours in length. And five hours is very important here because what we wanted to do was to create enough space for children to actively explore the issues because often what we’re doing when we consult children about things is we’re asking them to talk about things about which they have a lot of experience and expertise, but they haven’t necessarily had an opportunity to put those things into words. So, allowing them enough time and space to really work through what are the issues, what do we know about them, how would we put our experiences into language? This is a really important part of meaningful engagement, I would argue. Anyway, the upshot of that is that we now have a general comment. The children’s perspectives were used as a check and balance all the way through the two-year drafting process. 
And now I think we do have, as a result, a general comment that really encapsulates the key issues from adults’ perspectives, but filtered through children’s own lived experiences of these issues. So, that would be the first one. Another one that I would point to, given the sponsors of this panel, is that yesterday the International Telecommunication Union released an online safety app, game, and set of trainings for three different age groups of children internationally. And that’s a very exciting moment. Again, this is another piece of work where we engaged a children’s advisory group from six different countries around the world, from memory, and they were with us right the way through, from conceptualisation to the refining of the final products. And I think what we know from these and many other examples, as Afrooz just pointed out, is that this does result in online safety interventions that are much better able to address children’s real experiences, to speak about those experiences from the perspective of a child, as opposed to this sort of top-down methodology. And I think there’s another initiative that the ITU has underway, taking the lead from the eSafety Commissioner, but maybe I’ll talk about that next time, because I think my three minutes are probably over. Thank you.

Moderator:
Thank you, Amanda. So, Boris?

Boris Radanovic:
Thank you. Hello, everybody. At SWGFL, we’re a not-for-profit charity, and for the first time ever, we’re seeing the next generation coming into the workforce. And just a couple of weeks ago, we realised that the young are leading the inexperienced who are managing a system created by the elders, whatever the system is, and that creates a lot of issues and a lot of problems. Thank you for the question as well. There are a lot of examples that we can find, especially in the European Union, pan-European and worldwide. I think those examples of youth-led activities, children-led activities, or apps, or many of those examples need to shine even more through. But I did manage to find a couple that I think are worth mentioning and worth definitely shining a light on. There’s something called the Bully Blocker app. A group of teenagers developed an app called Bully Blocker to address online bullying. Think Before You Type, a campaign which was started by teenagers to raise awareness about the consequences of online hate speech and cyberbullying. The I Can Help movement, founded by a group of students, that later on became a literal movement, and digital literacy initiatives by youth all over the world. But at the same time, I found an example that just amazes me. A teen in Poland, disturbed by the reports of rising domestic violence under coronavirus lockdown, a Polish high school student decided to launch a fake online shop to offer a lifeline to victims trapped in their homes. Victims could look at lipsticks and other forms of makeup, but look for help in the descriptions of those lipsticks of different kinds of domestic abuse. I think that just showcases the different way of thinking and the richness that this little angel next to you gives to this conversation and to many others. And I’m saddened that often, and I am not young, and I’m often the youngest in the room, the youngest person in the room, when we are discussing how and what to do and how to help children. 
So I would love to see in the future conversations about this, to have children around the table and discuss the same things and principles, because in the last 10, 25 years of our charity, it has been evident that we, as adults, do not have enough experience to connect with what children are living through today, because we had the fortune or misfortune of not having 2K, 4K, HDR-ready connected cameras all around us and having to basically put yourself out there in front of the whole world to see. So it’s sometimes really difficult to understand the issues they are going through if you are not ready to listen. But then I will take it a step further and then action on that. It’s really nice to have a youth-led board. It’s really nice to have a youth advisory board. But the conclusions and advice that are coming out of that require, I would say they demand from us action so we can create a better world for them, because it’s going to be their world, and we are just managing it for the time. Rather badly, but I think with time, if we are listening to them a little bit more, that will be helpful. Thank you.

Moderator:
Boris, that’s a good point. Even within the UN, I’ve spent 15 years here, and I can see, and I’m largely involved in running processes that are member-state facing, where I sit at the ITU, and I’ve seen the delegations change in nature with more youth involvement. Of course, the definition of youth changes from country to country. So in some countries, you’ll see a 35-year-old, 40-year-old youth delegate, and in some they are much younger. But still, it’s getting younger, the age of the delegation. And also, we’ve had specific consultations with youth on so many of these topics that I’ve seen earlier being just decided in closed meetings among traditional set of delegates. So hopefully it’s all changed for the better. Thank you very much. And I can assure you there is no angel sitting next to me. They’re calling you an angel, man. Okay. So let me go to Courtney. Courtney, over to you.

Courtney Gregoire:
Well, thank you very much. And just at the outset, I think it’s valuable to think about the concept of our conversation today, policymaking with children. And I respect that many of our conversations today are about how we ensure that children’s voices are at the center of laws and regulation. But when we’re talking about the digital environment, let’s be perfectly honest, there are multi-layers of what policy is made and what it means to have policy made by tech providers that equally impacts the ability of children to unlock their potential, their rights in the digital environment. It’s also worth just stating the fundamental reality, that the digital environment was not designed for children. And we now have to recognize how significant a role it plays in their lives. Microsoft has a longstanding commitment to child online safety. And we also recognize the need to think and evolve from a protection mode to truly an empowering youth voices as to how they can unlock their potential through tech. Microsoft has a suite of products and services that intersect with children from their gaming lives, their social lives, to their economic and educational opportunities of the future. And we think it’s pretty critical to understand at the core how children are using our technology to better design it to fit their needs. First and foremost, I probably have to thank every single other panelist here because the work you’ve done has informed how we think about product design and build it in through our standard. Whether it’s recently a conversation with Amanda in Australia, reminding me that, you know, as a parent, how well does it go when you give a list of thou shall nots? The top ten things not to do online does not exactly inspire the young people to think about how to unlock their potential. And has had a huge impact on me reshaping how we think about that. 
Giving you two concrete examples about how we put this into practice, Microsoft has convened three councils for digital good over the past couple of years and I really respect exactly what you said. We structure that intentionally to be a conversation. We thought at the first outset it was important to create a baseline understanding with our young people, ages 13 to 17, how we think about privacy, safety and cyber security and how that’s built into our products. But with that baseline, bring it on. Tell us how you engage with our services and our apps and how those can better achieve what you want. They did have a final project and actually one of the most wonderful parts of my job has been reviewing some of those final projects. They were responsible for saying what they wanted their digital life to look like in five years. And then how do we together co-create that reality. And one of the most fascinating things was their sense of responsibility to their friends and their peers. They understood that maybe they had not reported something they’d seen online, but now they understood they were doing it for their community. The feedback we also got, we should learn because we obviously own a gaming platform, that kids learn through play. And so one of our releases just about seven months ago, Home Sweet Hmm, is a fun and educational way to introduce young people to online safety within Minecraft where they already spend a bunch of time. I had to be told why it’s called Home Sweet Hmm. That’s because I’m not an average Minecraft user. But you may know that the Minecraft iconic villagers don’t speak but rather grunt hmm. But in the cyber safe adventure, this sound is also intended to represent that pause. To think about what it means to make sure you are setting the tone for what you want for your online future. So we think through play when we get that active engagement, we can better help bring that to life.

Moderator:
Thank you, Courtney. Again, I’m using him as a subject. Yes, that is true. Minecraft, you do hmm. But, you know, about a decade and a half ago, I was also involved in the Child Online Protection Initiative for a few years. Microsoft has been a steady partner of this initiative over many years. So, you know, thank you very much for that. All right. Hillary. Hillary Bakrie, Associate Program Officer on Youth Innovation and Technology from the Office of the SG’s Envoy on Youth. Hillary, thank you for being here. So let me pose the question I have for you. So together with the ITU and thank you for highlighting ITU’s work, Amanda. So I know my colleagues have been working hard and, you know, so also a shout out to Carla and Fanny for the, you know, the new release yesterday. So, okay, Hillary. So together with the ITU, your colleagues present here today and many more partners from the UN, NGO, academic and ICT sectors, you’re working on POP, Protection Through Online Participation. So can you share some insights about this initiative? Give us some ideas on how children and youth can be part of the solution with regards to violence against children and violence online? Hello. Yes.

Hillary Bakrie:
No, yeah, thank you so much for the question. It’s been a very exciting journey actually to be part of POP with ITU, the Office of the SRSG on Violence Against Children, working together with Amanda, and many of their colleagues are in the room. So as you mentioned, the initiative is called Protection Through Online Participation. We call it POP for short, very youth friendly, children friendly in terms of name. And I believe the name also really speaks for itself, right? It has a vision of a world where children and youth can leverage the internet, can leverage digital platforms to safely access protection support either from official services or from their peers. And we often hear a lot about how the internet brings harm or there’s a lot of risks that come with digital platforms. I think with this initiative we’re taking a slightly different approach. We want to explore the other part of that narrative, right? We want to explore how the internet and other platforms could be used to, you know, do good, to empower, to create solutions that can help children and young people to stay safe. And with this initiative we’re actually doing a series of mappings. One of the mapping exercises that we are looking into is, one, seeing how young people and children are using the internet itself to access protection support, but second, and I think this is one of the key unique aspects of the initiative, is that we’re looking into the role of peer-to-peer support. So really looking into children-led solutions and initiatives, youth-led solutions and initiatives. And I think it was mentioned a little bit by Boris as well, how crucial this is. 
So we asked young people from around the world a few months ago to share and participate in this mapping exercise and through the survey we learned that the majority of young people, as obviously we assume, use online system and online platforms to seek support when they’re feeling scared, when they’re feeling unsafe or experiencing harm. And then majority of them either find this by themselves or actually through their peers. So this power of young people and children understanding themselves, understanding other children and young people, I think that really speaks for itself, right? Like no one understands children and young people better than themselves. And then I am now a millennial and then going at the end of my late 20s and then I cannot speak on behalf of Gen Z’s who are a few years younger than me. So I think there is also, it’s important to acknowledge that the power of that peer-to-peer support system matters. And from that finding as well, we learned that not only that young people and children have the agency to navigate this challenge when they’re feeling scared or when they’re experiencing risk in terms of harm and violence, but they also believe in the solutions and initiative that their peers created. And funny enough, when we ask young people and children if they know who made the solutions, if they know who made this platform, not many of them are actually aware if children and young people are part of the solutions that created this platform. But when we ask if children and young people should be involved in the design and the creation and the development of the solutions, the majority of them, I think nearly 80% of them really believe that young people and children of their age should be involved, right? So I think there was a little bit of this perspective, also mentioned by Courtney earlier on the importance of involving youth directly. 
And then I think Amanda also spoke a little bit on meaningfully engaging young people and children in the process.

Courtney Gregoire:
In Singapore, but we talked to parents and caregivers about the different conversations you want to be having with children, yes, from zero to five, and maturing from there, about how you understand children are using technology and being age-appropriate in those conversations. It’s just worth noting that, as a parent, that is a hard thing to do even if you do this job daily. It does mean getting in there to co-play and co-understand how they’re using technology. Just as we think about how we help our children understand the world, we need to be thinking about how they play through technology. Lastly, there is a big question on the table, and I think one of the most interesting things we’ve seen through surveys of kids is the challenge they feel in the misinformation and disinformation space online. They understand the overwhelming nature of what’s coming at them and want the tools to help them better understand and make rational decisions about the information they’re coming in contact with. There are opportunities to do that as we really think about content provenance and other spaces in generative AI. But if we don’t do that, if we don’t think about communicating effectively to young people what the information is, I don’t think we will really help them navigate that new world order in the generative AI context.

Moderator:
Thank you, Courtney. Actually, you gave me two interesting pieces of information among all the things you said. One is that children are more open to asking for help online, which is an eye-opening statistic. And second, you gave some very good examples of the potential of AI to do good, because obviously the conversation is all about governance, you know, which are important conversations to have, the guardrails that are needed for generative AI. But you’ve highlighted the good that it can do when used the right way, in a responsible way. Thank you very much.

Courtney Gregoire:
Can I just add, the research finding you said was surprising was surprising to us too. So of course children know that they turn online and to their friends and peers, and I love that. But we have to acknowledge, for those working at tech companies and in government, what was eye-opening for us to learn: they’re willing to ask a digital technology the vulnerable questions that we wouldn’t. We now need to build that into how we think about policy.

Moderator:
Absolutely, absolutely. And I acknowledge that, yeah. All right, so Hillary, let me come back to you. Can you tell us a little bit about how children and young people would like to be involved? You know, what are they asking us, the international community, to consider when it comes to policymaking in relation to online safety?

Hillary Bakrie:
Yes, well, the short answer is nothing about us without us. I think earlier I shared that, through the mapping that the POP initiative did, young people noted that they believe youth should be part of the solution, children should be part of the solution. But even just a few days ago at the IGF Youth Summit, many youth activists and leaders also highlighted that when it comes to policymaking processes and policy implementation on cybersecurity, on online safety, on safeguarding human rights in this digital age, young people are still not fully included, right, at the decision-making table. Sometimes youth are consulted, and I think Amanda mentioned a very exciting trend, that there is an increasing trend in terms of meaningful engagement. But when it comes to actually making and delivering decisions, young people are not yet included as partners, as I briefly mentioned earlier. And if you look at the bigger picture, nearly half of the world’s population is young people; however, less than 3% of parliamentarians are under the age of 30. And I think that number speaks for itself, right, the lack of representation of youth, and even more so for younger youth and for children specifically. So I think we need to change this number. We need to make these spaces available for young people and have young people meaningfully included in the policies that really affect their lives, policies like those on online protection, on cybersecurity and many others. And beyond access, many young people have noted not only that access to the policymaking process is important, but that we need to create an enabling environment, right, so youth can effectively contribute as partners. We need to close the digital divide and make sure that everybody has access to meaningful connectivity.
We have to invest in young people’s and children’s education and skills, right, not just the technical and digital skills that will help them become experts and contribute substantively to the subject, but also the skills that will allow them to navigate what the policymaking process looks like. And to build on this, we have to make information on policies and processes not only transparent but also accessible, right, to every layer of the community, both children and young people, and accessible not just in terms of language, but also taking into account the cost of accessing information, disability inclusion, and many other factors that could make it a more inclusive process for both children and youth. And lastly, I think many young people have also voiced that they are already working in this sector. Many of them are young innovators. Many of them are young people in STEM, or are contributing to policy processes at the regional or national level. So really, just reiterating what I’ve been saying, it’s important to acknowledge them as partners and experts in this, again, so that they can have an equal footing in this conversation, yeah. Thank you.

Moderator:
Thank you very much. Afroz, the next question is for you. We’ve heard what children and young people are asking for, but in practice, how can we actively involve children in policymaking that concerns them? And can you provide some examples, you know, good practices or lessons learned from successful initiatives that meaningfully involve children in policymaking?

Afrooz Kaviani Johnson:
Yeah, thank you. I think I’m going to pick up a lot of what Hillary just mentioned. I just want to point to another general comment of the Committee on the Rights of the Child, number 12, which actually talks about child participation. And I think this is really important when we’re thinking about what makes effective child participation, and it applies to all processes in which children are heard and participate. They’ve got nine basic requirements. So one, that they’re transparent and informative. Two, that they’re voluntary. Three, that they’re respectful, child-friendly, inclusive, and supported by training. I think that’s a big one; it’s not just something that happens. The people that are facilitating and the children that are participating need to be supported by training. Another really important one: safe and sensitive to risk, recognising that it’s not always a safe process to engage children on some of these sensitive issues, and sometimes even when it seems like it is fine, things can come up. So being ready for that. And then, very importantly, accountability, so being accountable as well. So I’m just going to share two quick examples, in however many minutes I’ve got left, very few, that strive to embody those requirements. They’re examples from my colleagues around the world. Firstly, in Tunisia, children’s voices have helped shape the first ever National Plan of Action on Child Online Protection. And the impetus for this plan actually came from children: it came from a qualitative study with children about their online experiences. There was a series of focus group discussions with girls and boys aged 11 to 17, taking place in different parts of the country, so really trying to make it accessible and inclusive. And children were consulted not only for input into the plan of action, but also to validate and provide feedback on the draft plan.
The children involved in the consultations weren’t just your usual kids on school councils or a convenient sample of children; they included children in school, but also out of school, and those living in alternative care, even in residential care facilities. And I think there were a lot of insights from that process that wouldn’t have been garnered if it had just been an adult-led process, insights into the topics that were most important for kids, with privacy and data protection coming out really strongly. The kids shared preferences on how they want to receive information, be it peer-led initiatives or online or school programs. So it was a process in which children could not only voice their concerns but also help shape the measures that the country is now going to take. The second example, which I’ll summarize very quickly, is from the Philippines, where there’s a longstanding practice of child participation that’s being refined and improved over time. Our colleagues at UNICEF recently supported a series of consultations to inform the new national plan of action on children, and they’ve used similar methodologies to inform other pieces of legislation, including the recent legislative instrument on online exploitation and another on child marriage. So just some success factors around the methodology. The facilitators are young people, not older adults; they’re adults, but younger adults in the scheme of things, and they’re young adults that have been trained over years and supported in this process. The facilitators are also from the communities the children are from, right? So there’s already trust and there’s already a relationship there, and yeah, it makes it more accessible as well. There’s minimal adult intervention, or I should say older adult intervention, so that children feel safe to voice their perspectives. They’re not intimidated.
They’re not influenced. Interestingly, there was also programming for the parents and caregivers on the side, right? Because it’s not always that easy to gather and consult children, and what are the parents going to do during that time? So there was programming to engage them in parallel sessions. There was an emergency response plan in place for safeguarding, and there were social workers ready in case there were disclosures. And in fact, there were disclosures during the participation. Just quickly, the methodology was child-friendly, so a lot of games. In terms of inclusivity, a lot of diverse groups of children were involved: children with disabilities, children in alternative care, children in street situations. But you can’t just bring all these kids from all these different situations together without some careful planning, thought and preparation. And what I loved, when my colleagues were telling me about this, was that they wanted every child to go home feeling like they were seen and heard through that process. That really stood out to me as a principle. And I think, just to close up, some of the lessons learned, especially from colleagues in the Philippines who have been evolving these practices of child participation, are that it takes resources, it takes funding, it takes deliberate investment in the capacity of facilitators over time. Some of the young people that are now facilitators were actually consulted themselves at times during their childhood, so there was that nice building of capacity over time. So I’ll leave it there. Thank you.

Moderator:
Thanks, Afroz. Very interesting information. I think we have 15 minutes, so we’d like to have at least six, seven minutes of Q&A, if not more. So let me quickly move on to Boris. Boris, so you’ve concretely worked with the ITU to draft national strategies on child online protection in several countries. Can you tell us a bit more about this work and how it positively affects children’s well-being online?

Boris Radanovic:
Thank you. I think that’s the best part of my job, honestly, for the last 10 years I’ve been doing this. I’m going to call this a love letter to your government. Whichever government is watching this, listening for the last three days of IGF and wondering, where am I supposed to start? What am I supposed to do? The ITU In-Country National Assessment is where you should start. So basically, the principle is, if you apply and discuss it with ITU, you’ll get the support. The worst part of that support is that you’re going to meet me, but everything else is awesome. So, the National Child Online Safety Assessment. I honestly urge each and every one of you, especially the government representatives listening to this, to consider applying for this. It includes a comprehensive assessment of the existing situation, the development of a national strategy, and then the action plan, the much-needed action plan, with recommendations based on global best practices. I had the pleasure of visiting many countries, and the difference that 50 to 100 miles makes in culture, approach, consideration, and the data behind it is just remarkable, showing that we can have global solutions, but we need local adaptations, and they need to be carefully, carefully managed. With this, you not only enhance your national policies, standards, and mechanisms so that you can ensure the safety and well-being of children in the digital realm, but you do so for all children in the entire country. I heard a lot of powerful words at the IGF over the last couple of days, inspirational speakers, many paradigms, new shifts, and I love it, but I come from the non-profit sector; we are there for impact and action. So the time is now, you can apply for that, and I think this is the first step for any government official considering where to start.
This is a beautiful first step, and you need to take that step as a responsible government, to understand where you are right now so you can understand where you want to go. Do not leave the children of your country behind by not adopting simple actions that have wide-reaching consequences and can protect them literally immediately, while we are doing the assessment. I’m going to tell you a little bit about what it is. All of us, but especially anybody working in government, have a duty to protect the children in your country, and we want to help you with that, and pretty much that is it. Let us help you skip years, and in some countries decades, of stumbling in the dark and endangering children through lack of knowledge, experience and awareness of the global best practices, of what to do and how to avoid some of those issues. Please do reach out to ITU to start the process, and together we can create a better and safer environment for all children in your country, and then for all children in the world afterwards. By working together, we can build that environment, but while understanding the issues on a global scale, we need to understand that each of those issues manifests really differently in each country. So how does it look? It’s rather simple. After the application process starts with ITU, the research happens first, and we love to do that research because we send it at the same time to children and parents, and some of the questions are the same, but the parents and children don’t know that they have the same questions. So in some countries, 80% of parents would say they feel that their children will always talk to them if they find any issues online, while only 10% to 20% of children would really speak to them. So it’s really evident from the get-go that we as adults have a totally different picture of what is happening on the ground.
Then come the interviews, and I think that is the best part of this process, because we do a multi-stakeholder interview, which is basically a marathon interview, 12 to 14 hours a day, with every part of the government, NGOs, industry, stakeholders, every part of the internet society in each country, asking them similar or sometimes exactly the same questions to see those different perspectives, then combining that into a report and providing positive examples on a global scale. And literally in the middle of an interview, you can find the gap. In one country, child sexual abuse material was illegal to distribute and to download, and the law stopped there. Then we asked a simple question: what about possession? It was not in the law. So we just added one word to the law, and suddenly the police could take action. In another country, they were really proud that they have a cyberbullying law, but the cyberbullying law only applied if you were cyberbullying a child in your own school. So if you were cyberbullying a child in a different school, the law didn’t apply. There are many, many other kinds of simple solutions or simple fixes to well-intentioned actions already under way, seen literally in a couple of hours, if not a couple of days. So this is bespoke assistance done by experts in the most sensitive way, listening to the voices on the ground, at the same time interviewing and listening to children, going to schools, and letting their experiences shine through in the report as well. And if you’re thinking about doing anything, especially after this wonderful IGF, I honestly believe this is a great first step, and I think that is the way we can create a better and safer Internet for us all: by thinking globally but looking at how to implement it locally. Thank you.

Moderator:
Thanks, Boris, and again, thanks for highlighting this important work of the ITU. But of course, we wouldn’t have been able to do it successfully without partners such as you, and all of you at the table, so thanks again. So I think this is the last question, and it’s for Amanda. You’re currently working with the ITU to involve children in the development process of national strategies related to child online protection. Can you share a little bit more about this effort and what you’re expecting to gain from it?

Amanda Third:
Yeah, sure. So I am indeed leading a piece of work with the ITU around the development of national child task forces in five countries to support the development of national strategies on the ground, which is a really exciting piece of work that I’m very happy to be part of. But it builds on a previous piece of work, going back two years now, when the eSafety Commissioner in Australia commissioned the Young and Resilient Research Centre that I direct to design a national task force to guide the government on online safety policy and programming across the country. What we did was work with a group of young people aged 10 to 18 to really dig deep into their experiences of online safety, but also of online safety interventions, to get a really nice, deep snapshot of the strengths and limitations of all the good work that’s going on. The eSafety Commissioner was very, very invested in doing this work so that they could really understand whether or not their messaging was hitting the mark and whether there were any impacts emanating from some of their interventions. And certainly we did come across those. But young people in that study also gave us a really strong reality check on just where messaging is failing to land. They reacted very strongly to top-down messaging, the list of don’ts. We all know that, don’t we, from our interactions with children in our everyday lives. But they also spoke passionately about the ways they felt they had expertise that could be channelled meaningfully into policy-making processes. So after we’d identified the strengths and limitations of the messaging and programming, we then worked with them to co-design a national youth advisory.
And one of the beautiful things about this process this time around was that the old people like myself, and I’m never the youngest person in the room, Boris, were not face-to-face with young people. We trained a team of youth researchers, around the age of 18 to 20, to implement this work. And wow, it was a complete game-changer, because those young people opened up in ways we just hadn’t seen before. So that was really, really exciting, and subsequently we’ve gone on to design a youth co-research toolkit to support young researchers to be part of teams. I’m happy to share a link to that if anybody would like it. Out of this process, young people designed a mechanism, if you like, whereby a diverse group of young people are appointed over a two-year period and guide and shape the government’s approaches to online safety policy and programming. It’s early days yet; we have only just started to implement that program, and indeed it was implemented rather more quickly than we had anticipated when a federal politician took a liking to it. So we’re just working out some of the bumps and roadblocks. But now what we hope is that we can translate this process, working with people on the ground in different countries to culturally adapt it and to create an ongoing mechanism for children to feed into the process of developing national strategies in their various countries. So I would say it’s very, very encouraging to see so many different entities, from Microsoft to national governments to NGOs, embracing advisory mechanisms to guide their work. But let’s not rest on our laurels. It’s really wonderful to have these mechanisms, but we need to make sure that they stay fresh and remain open to young people’s insights and perspectives.
Because they can get ritualistic; they can become tick-the-box kind of mechanisms, as I said in an earlier panel today. And I think we need to remember, too, that not all young people’s experiences can be appropriately reflected through formal advisory mechanisms, right? So I think there are two things we should be thinking about in particular. The first is to level up on Boris’s challenge: not just to respond to children’s and young people’s insights, which is absolutely important, but really to reflect on our own processes, to think about the ways that we as adults often run things in a quite closed-door sort of way, to reflect on our processes of decision-making, and to really expand those and make new spaces for young people to become part of an ongoing, real-time conversation rather than the thing we do when we need some input, right? How do we embed it? For example, could there be young people shadowing important decision-makers, guiding them on what children and young people might like to do, etc.? So there are things to think about there. How do we transform our institutions so that young people can genuinely influence the agenda? But also, how could we use our products and platforms to seek young people’s input on issues that relate to policy in a really everyday way, one that taps into their everyday interactions, right? How can we make spaces for young people’s input to be channelled through to the decision-making process inside organisations? So, like I say, I’m really excited and encouraged to see the ways these youth participation models are being embraced around the world. It’s just so, so fabulous to see how far we’ve come in the last decade, when we really were incredibly focused on protection and not really thinking about these questions of participation. But let’s also continue to think creatively about participation.
Let’s not rest, and let’s treat this question of participation as one we won’t fully resolve and have to keep paying attention to as things unfold. Thank you.

Moderator:
So thanks, Amanda. Your call to action was a good closure to the formal set of questions. So let me open it up for Q&A; I think we can take five minutes. I have an online question, so let me start with that. It’s about the UN Youth Envoy’s and ITU’s plans to empower young minds against cyberbullying. Would you take that? Thanks.

Hillary Bakrie:
Thank you. I think we had a chance to see the full question that was shared online, and thank you so much for sharing that. If I’m not mistaken, the question also highlighted that many young people have been working on solutions that help other young people and children navigate cybersecurity and so on, so it’s actually exciting to see a real-life example of that. It would be really great, actually, if you are a young person, child or adolescent who is working on these solutions, to be in touch with us. Perhaps the online moderators could help out and pop in the link to our initiative, Protection Through Online Participation, because we actually want not only to hear from you but also to learn from you specifically: how have you been building the solutions? Why did you decide to make them? We want to learn from your expertise as children and young people with lived experience who are navigating these challenges of cybersecurity and online protection, and we want to partner with you, as we indicated earlier in the panel; it is important to recognize you as partners. So if you would be open to connecting with us, that would be really great, and perhaps the online moderators can share the link for how you can get in touch with us. And then just a quick shout-out again to ITU and the other interagency partners, like the Office of the SRSG on Violence Against Children and also UNICEF, who have so many resources and have been building the capacity of children and young people to contribute to this space as well.

Moderator:
Thank you, Elriana. In fact, online moderator, if there are other questions being posed online (I just don’t have access to them), please ask for the mic and you can read out the questions. Meanwhile, anyone here who... ah, yes, please.

Audience:
Hi, everyone. I’m Marie Stella from the Philippines. I’m just wondering, based on your interactions with youth and children, did they say anything about age verification? Because adults like us can’t seem to find the perfect solution to age verification. Do they think that we even need age verification online? Or is awareness-raising and education enough to make sure that they will not access illegal or harmful content online? Thank you.

Boris Radanovic:
I’ll try to be quick and give space to the others. It’s a wonderful question, and thank you so much. Across the over 17 countries where I had the pleasure of speaking with children, they do not want content that is not intended for them in their spaces. And sometimes that is evidenced by data: a significant proportion of children encounter adult content or other abusive content unwillingly. That’s the problem; that’s step number one. Step number two: we as adults do have to implement protections for children, and we should consider their opinions about it, because the internet and the content there is not created for children. So we need to find a way to make it safer. Unfortunately, the solution to age verification is another million-dollar question. But I think we will get there, and we will find a way to do so, and we will get there with children, because those spaces need protection so children can feel safe and protected. I hope that helps a bit.

Amanda Third:
I would just add quickly that when we spoke to children in 27 countries for General Comment 25, they were very, very clear with us that they wanted better protections online. They were equally clear that they want their data to be protected, and they want to know how their data is being protected and why. So our solutions around age assurance and age verification need to balance these tensions, obviously. But they were also quite clear with us that systems set up to prevent them doing things are often an invitation to subvert them. So I think we also need to think through the implications of protecting children through a range of mechanisms, and to ask: does that actually keep a child safe all the time, or does it make them unsafe in ways that we then can’t deal with? So there’s a very complex set of questions there that we need to work through, and, as Boris says, in partnership with children.

Moderator:
So before I give you the floor, I have the question now, and let me acknowledge the person who posed it. It’s Omar Farooq, a 17-year-old boy from Bangladesh who is working actively to ensure children’s rights and mental health. He’s the founder and president of Project OMNA, an upcoming AI-powered mobile project focused on children’s mental health and child rights.

Audience:
Thank you to the panelists. I’m Dora from UNICEF. I just wanted to ask, maybe Microsoft, but open to the other companies as well: in contexts where child participation or democratic participation is more restricted, what could be winning arguments to engage companies to include child participation in design, or what could be winning strategies to convince them?

Courtney Gregoire:
A great question, and you’ve opened a new window in my brain. As I mentioned before, Microsoft had previously convened Councils for Digital Good and leveraged them as an important mechanism for conversations about safety by design. As we’ve thought about how to scale, we really took a step back and said we want to work with existing organizations who think, day in, day out, about how to engage child participation, and leverage their understanding and the information they can gather as we think about product and safety by design. Our expectation is that we should be doing that with the NGO community and the academic community, who are building in, as everyone here has said, the fundamental principles we need for meaningfully engaging children: that we are representative, that we are doing it in a safe environment, that they have the trust. And we’ve had that moment of, okay, that’s how we should be leveraging the ecosystem. I think we need child voices and participation at both layers; they have to influence both. And one thing I failed to mention before: we had thought about our child participation and the councils to inform product design, yes, but we also opened the door for them to talk directly to regulators, whether that was Ofcom or Arcom, so that it really was the true multi-stakeholder engagement we wanted to hear out of those voices. They knew that they should be influencing all the rules that impact them, from a regulatory and legal perspective to the rules of the road set by companies.

Moderator:
I’m getting a signal that we have run out of time, so let me quickly close by thanking our speakers. It was a fantastic panel. Afroz, Amanda, Boris, Courtney, Hillary, thank you very much. I guess one takeaway which is very clear is that the influence of children on the design of online spaces is unquestioned. It’s also important that we, as decision makers and product makers, make sure that their voices are properly heard and meaningfully taken into account. So again, thank you very much. Let’s end quickly with a round of applause for the speakers. And we hand over the room to the next set of people. Thank you.

Afrooz Kaviani Johnson

Speech speed

180 words per minute

Speech length

1476 words

Speech time

492 secs

Amanda Third

Speech speed

166 words per minute

Speech length

1901 words

Speech time

686 secs

Audience

Speech speed

144 words per minute

Speech length

148 words

Speech time

62 secs

Boris Radanovic

Speech speed

217 words per minute

Speech length

1907 words

Speech time

528 secs

Courtney Gregoire

Speech speed

200 words per minute

Speech length

1385 words

Speech time

416 secs

Hillary Bakrie

Speech speed

192 words per minute

Speech length

1606 words

Speech time

501 secs

Moderator

Speech speed

171 words per minute

Speech length

2437 words

Speech time

853 secs

CGI.br’s Collection on Internet Governance: 5 years later | IGF 2023 Open Forum #98

Table of contents

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Vinicius W. O. Santos

The open forum, co-organized by CGI.br (The Brazilian Internet Steering Committee) and ICANN (Internet Corporation for Assigned Names and Numbers), aimed to facilitate the exchange of experiences and knowledge on memory and documentation in the field of Internet Governance. CGI.br presented its efforts in designing a specialized library on Internet Governance and publishing official documents related to the subject.

Discussions at the forum focused on the challenges of managing and retrieving information in the dynamic field of Internet Governance. Participants expressed a strong interest in creating a comprehensive collection to organize, preserve, and retrieve the wealth of information generated in this rapidly evolving domain.

The project initiated by CGI.br and ICANN has grown significantly, incorporating various tools, practices, methodologies, and content types. This progress demonstrates the commitment and dedication of the organizations involved in ensuring accessibility and availability of valuable resources in Internet Governance.

Looking ahead, the session addressed the challenges and future prospects of the project. There are various obstacles to be addressed in the short and long term. The forum aimed to provide an overview of the project’s progress and foster discussions on overcoming these challenges.

Partnerships and collaboration emerged as fundamental pillars for successful work within Internet Governance, particularly in informational and archival projects. The absence of partnerships could impede the implementation and deployment of these initiatives, while collaborative efforts form the foundation of achievements in this field.

In conclusion, the open forum organized by CGI.br and ICANN served as a valuable platform for exchanging experiences and knowledge on memory and documentation in Internet Governance. It highlighted the importance of specialized libraries, the publication of official documents, effective management and retrieval of information, and establishing partnerships and collaborations for success in this domain. The project has evolved significantly, but challenges remain. Future prospects were discussed for continued progress.

Audience

The role of libraries in providing access to knowledge and the internet is significant, with a major impact on internet history and on facilitating access. They bridge the digital divide by ensuring access to information and resources. In the context of internet governance-related events, having content available in Portuguese is seen as crucial for effective engagement and reducing inequality. CGI.br collections greatly contribute to qualifying Brazilian participation in internet governance-related processes, and their systematic organization and documentation are commended. Additionally, there is a call for a larger project on an internet data archive that would benefit researchers, policymakers, and civil society. The use of machine learning algorithms for data categorization and the importance of partnerships and collaboration in internet governance projects are also highlighted. The absence of a taxonomy initiative in internet governance sparks curiosity, and the library sector recognizes the importance of the internet in delivering services and promoting information dissemination. Libraries are seen as more than just buildings, playing a vital role in educating for digital and information literacy. The close connection between libraries and internet service providers is emphasized, and there is a suggestion to explore the use of large language models for taxonomy extraction. Overall, the analysis underscores the significance of libraries, language inclusivity, collaboration, and technological advancements in internet governance.

Jean Carlos Ferreira dos Santos

The project at hand is primarily concerned with the development of the CGI.br and NIC.br collections, encompassing various activities such as documentation, publishing, and the design of a physical library. To efficiently manage and organize the collections, the project relies on tools like DSpace for the creation of digital repositories, Koha for library system operations, and VuFind as a discovery tool.
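
The metadata layer behind these tools is worth a closer look: DSpace describes each item with Dublin Core elements and can expose records for harvesting in the `oai_dc` XML format. The sketch below, with an invented example record (the field values are illustrative, not drawn from the actual repository), shows roughly what such a record looks like when serialized:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
ET.register_namespace("dc", DC)
ET.register_namespace("oai_dc", OAI_DC)

def make_oai_dc_record(metadata: dict) -> str:
    """Serialize a flat {dc-element: [values]} dict as an oai_dc XML record."""
    root = ET.Element(ET.QName(OAI_DC, "dc"))
    for element, values in metadata.items():
        for value in values:
            child = ET.SubElement(root, ET.QName(DC, element))
            child.text = value
    return ET.tostring(root, encoding="unicode")

# Hypothetical record for one Brazilian IGF workshop report:
record = make_oai_dc_record({
    "title": ["Brazilian IGF: Workshop Reports"],
    "creator": ["CGI.br"],
    "type": ["Text"],
    "language": ["pt"],
})
```

Keeping records in a standard format like this is what allows a discovery layer such as VuFind to index the digital repository alongside the Koha catalog through a single search interface.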

Additionally, the library serves as an educational resource for capacity-building initiatives, providing valuable information and resources to enhance knowledge and skills. However, the project faces certain challenges. One of these challenges is the construction of controlled vocabularies, a crucial aspect in the field of internet governance. Likewise, the implementation, development, and maintenance of open-source tools present significant complexities.

Nevertheless, the project also brings forth several potential benefits. Collaboration and dialogue within the IGF community, for instance, can lead to numerous fruitful collaborations and exchanges of experiences. The project is open to establishing networks with other organizations and aims to maintain constant dialogue with the IGF community.

Furthermore, the project recognizes the potential of machine learning algorithms for categorizing documentation, as demonstrated by the OECD AI Policy Observatory. Additionally, NIC.br produces a substantial amount of data, which the project must also account for.
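
The categorization mentioned here can range from trained classifiers down to simple term matching against a controlled vocabulary. As a minimal, illustrative stand-in (the three taxonomy terms and their synonyms below are invented, not the project's vocabulary), a document can be tagged with every term whose label or synonym appears in its text:

```python
# Naive taxonomy-driven categorization: tag a document with every
# taxonomy term whose synonyms occur in the text. Production systems
# would use trained classifiers, but the interface is similar.
TAXONOMY = {
    "access": {"digital divide", "connectivity", "broadband"},
    "infrastructure": {"dns", "ipv6", "routing"},
    "privacy": {"privacy", "data protection", "personal data"},
}

def categorize(text: str) -> list[str]:
    lowered = text.lower()
    return sorted(
        term for term, synonyms in TAXONOMY.items()
        if any(s in lowered for s in synonyms)
    )

labels = categorize("A report on IPv6 routing and the digital divide in Brazil")
# labels == ["access", "infrastructure"]
```

However simple, this kind of rule-based pass is often used to bootstrap training data for the machine-learning approaches the OECD observatory applies at scale.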

However, challenges concerning data description, preservation, and reusability are noticeable. Describing the data effectively, preserving it for future use, and ensuring it can be reused poses significant hurdles that the project aims to address.

To improve data usage, the project actively seeks tools and standards that can be employed to utilize data more efficiently. Additionally, by incorporating standards and practices from library studies, the project hopes to organize the vast amount of content produced in the field of internet governance.

Standardized identifiers are deemed crucial for better content recovery and preservation within the internet governance community. The usage of digital object identifiers (DOIs) is recommended to prevent the loss of content.
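
Part of what makes DOIs effective for recovery is that they can be reduced to one canonical form before records are stored or compared: DOIs are case-insensitive, and the same DOI may appear as a bare string, with a `doi:` prefix, or as an `https://doi.org/` URL. A small normalization sketch (the regular expression is a common heuristic for modern DOIs, not an official grammar):

```python
import re

def normalize_doi(raw):
    """Extract a DOI from a string and return it bare and lowercased.

    Returns None when no DOI-shaped substring is found.
    """
    match = re.search(r"\b10\.\d{4,9}/\S+", raw)
    return match.group(0).lower() if match else None

normalize_doi("https://doi.org/10.5281/ZENODO.123456")
# -> "10.5281/zenodo.123456"
```

Normalizing at ingest time prevents the same publication from appearing as several duplicate records in the catalog.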

A noteworthy objective of the project is to establish a taxonomy for internet governance. This initiative requires the cooperation of various stakeholders, including information science professionals, the technical community, and others. Creating a taxonomy will enable better organization and understanding of the interdisciplinary aspects of internet governance.
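
A taxonomy of this kind is typically modeled the way SKOS (and vocabulary tools built on it) does: each concept carries a preferred label, optional alternative labels, and broader/narrower links to other concepts. A minimal sketch of that structure, with invented example concepts rather than the project's actual vocabulary:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A SKOS-like vocabulary concept with broader-concept links."""
    pref_label: str
    alt_labels: set[str] = field(default_factory=set)
    broader: list["Concept"] = field(default_factory=list)

    def ancestors(self) -> list[str]:
        """Walk broader links up to the top of the hierarchy."""
        seen: list[str] = []
        stack = list(self.broader)
        while stack:
            concept = stack.pop()
            if concept.pref_label not in seen:
                seen.append(concept.pref_label)
                stack.extend(concept.broader)
        return seen

governance = Concept("internet governance")
privacy = Concept("privacy", alt_labels={"data protection"}, broader=[governance])
consent = Concept("consent", broader=[privacy])
# consent.ancestors() == ["privacy", "internet governance"]
```

Broader/narrower links are what let a catalog search for a general term also retrieve items indexed under its narrower concepts.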

The importance of specific terms and concepts is acknowledged as aids in understanding the boundaries and components of the field of internet governance. By having a shared understanding of these terms and concepts, stakeholders can effectively communicate and collaborate.

The library created by the project seeks to meet the information requirements of the internet governance field. It has been designed to provide a comprehensive range of resources, expanding beyond traditional books. Additionally, efforts are being made to enable the Brazilian community and others to contribute to internet governance through the library.

Lastly, the project emphasises its openness to collaborations and contributions. By actively involving the community, the project aspires to build a stronger and more inclusive internet governance framework. After the session, the project encourages open dialogues, welcoming any questions or further discussions.

In conclusion, the project focuses on the development of the CGI.br and NIC.br collections. It employs various tools and technologies to manage and disseminate information effectively, while also addressing challenges such as the construction of controlled vocabularies and the implementation of open-source tools. Collaboration, dialogue, and the utilization of machine learning algorithms are recognized as valuable assets. The project also emphasizes the importance of data description, preservation, and reusability. By incorporating library studies, standardized identifiers, and establishing a taxonomy, the project aims to enhance information organization and understanding within the internet governance field. Furthermore, the project seeks to build a comprehensive library, engaging the community and encouraging collaborations and contributions. Overall, it demonstrates a commitment to the continued development and improvement of internet governance practices.

Session transcript

Vinicius W. O. Santos:
Good morning to everyone, the few ones that are with me here in the room, but also to those that are remotely following us. Also good evening and good afternoon for those in different time zones. Thank you for attending our open forum, named CGI.br’s Collection on Internet Governance: Five Years Later. Our intent with this session is to build upon previous work that we have done and presented here at the IGF to this community, and also to advance these dialogues, advance some discussions, advance partnerships and so on. So I’ll just make a brief start, with a brief description of what we are doing here, what our main concerns are, and a bit of history, very briefly, and then we go to the main presentation. Thank you. So this open forum, as I said, draws upon a previous open forum held at the IGF 2017 in Geneva. That one, more than five years ago, almost six right now, was called Memory and Documentation in Internet Governance: The Challenge of Building Collections. There, we described our history and the importance of building collections, shedding light especially on initiatives focused on internet governance. It was a very interesting session which provided a productive debate between participants and rich inputs on documentation processes and the challenges of creating collections on internet governance. At that time, the open forum was co-organized with the Internet Corporation for Assigned Names and Numbers, ICANN, as a means to exchange views and experiences between the initiatives of both organizations in terms of memory and collections in internet governance. ICANN emphasized its initiative focused on documenting, organizing and preserving institutional information and memory. At that time, ICANN was also launching a new initiative together with us. 
We as the Brazilian Internet Steering Committee, CGI.br, presented our initial efforts to design a specialized library on internet governance. We first introduced to the IGF community what we were planning and had recently deployed within CGI.br regarding documentation, memory and collection in internet governance. We also shared our experience on CGI.br’s documentation and outreach initiatives, including our publishing initiatives, which cover different books and other publication formats, as well as various CGI.br official documents. Some questions really guided us back then in that discussion, and I think it’s very important to mention those questions now for us to move forward. At that time, we were asking: how do we establish practices, processes and technological tools to organize and retrieve information in the internet governance field? What are the particularities and challenges of managing and retrieving information about internet governance? This was the core of the discussion at that moment. These concerns emerged from the interest in creating a collection to organize, preserve and retrieve information produced in this field. Of course, we were dealing specifically with our activities as the background, so the activities of our organization, but we were also trying to envision how this experience could dialogue with other initiatives, other organizations, other types of work. From that time until now, the project has evolved in many aspects. The debate in that session allowed us to advance in the project to create and develop the CGI.br and NIC.br collections. We at the Brazilian Internet Steering Committee and the Brazilian Network Information Center started to grow this set of tools, practices and methodologies, contemplating more and more activities, types of publication and types of content, and trying to organize them all. 
This time, we are at this session to present to you how this work has been evolving since then, what the main achievements of this work have been until now, and what our main concerns are at this moment, but also looking further: what challenges lie ahead, and how do we expect to address them in the short and long term? This is more or less the intent of the session, the basics, I would say. I will now pass the floor to my colleague, Jean, who will present the actual status of our project, and then we will get back to the audience and hope to have some sort of conversation with those interested in these themes, and maybe some questions and answers; we are available for any sort of dialogue. Thank you. Jean, the floor is yours.

Jean Carlos Ferreira dos Santos:
Thank you, Vinicius. Good morning, everyone, and thank you for attending our Open Forum, CGI.br’s Collection on Internet Governance: Five Years Later. My role here is to report how things have been going since the previous Open Forum five years ago, and the idea is to report the main activities of our collection. CGI.br’s activities include the production and dissemination of information on the use and development of the internet in Brazil through its operational arm, the Brazilian Network Information Center, NIC.br. There is a wide variety of materials created by CGI.br and NIC.br, such as books, guides, reports, minutes of CGI.br meetings, resolutions, technical notes, and other outreach materials. During the pandemic, the amount of audiovisual content also grew significantly. Many events moved to the online format, and the videos, for example, were made available on NIC.br’s YouTube channel, and different areas in NIC.br also started producing different content, for example, podcasts. And in order not to let all of that become just more items lost among so many books, videos, images, and sounds, we developed a permanent and well-established documentation process to preserve all that content and make it available to the Brazilian audience. So I would say that we have mainly three pillars that support the development of the CGI.br collection. The first pillar is documentation of CGI.br activities, which involves organization, classification, and retrieval of minutes, resolutions, and other activities carried out by CGI.br. All the official documentation produced by CGI.br is made available online on our website. Another aspect involves documenting, recording, and preserving the memory of CGI.br’s activities and processes. Making them recoverable is one of the main challenges. It’s necessary to thematically sort and catalog this internal information for use in CGI.br processes, enhance transparency, and make it accessible to anyone interested in referencing or repurposing this content. 
For example, researchers, students, and anyone who wants to understand how CGI.br works and what it does. The second pillar encompasses our publications, for instance, the CGI.br book series. This is a book collection, started in 2014, that focuses on references and studies on internet governance in both printed and online formats. The series has had a significant impact on the local community. These book collections aim to provide the community with access to essential references for internet governance. When the documents are not available in Portuguese, one of the steps is translating them into Portuguese. Just to mention some interesting examples: we translated the WSIS Declaration, which was the first title we published in the CGI.br book series. Jovan Kurbalija’s book An Introduction to Internet Governance, a very important reference for understanding what internet governance is, was a cooperation with the DiploFoundation. We have translated some reports through collaboration with the Internet & Jurisdiction Policy Network, for example, the Global Status Report and other documents. These publications are sent to a large number of libraries in Brazil. Our partner library network, which receives CGI.br and NIC.br publications, has approximately 700 entities across the country, including university libraries, public libraries, libraries of research institutions, third-sector and civil society institutions, and the business sector, among others. The books are also adopted by different capacity-building initiatives in different parts of Brazil, such as universities and schools. So these publications are very important in different contexts in Brazil. Furthermore, it’s worth underlining that CGI.br and NIC.br embrace an open access model, so our publications are freely available for download on our website under a Creative Commons license. 
The project’s third pillar involves designing a physical space to house the physical collection and support the community interested in internet governance subjects. In this sense, we have worked on creating a space with a specialized physical library, which brings together printed reference pieces on the most diverse topics in internet-related activities and areas of knowledge. This library is based at the NIC.br headquarters. Our way of working has been to conduct a bibliographical survey, monitoring the bibliographical production on internet governance and related subjects to integrate it into the collection, so we are permanently obtaining new books and publications on internet governance. Currently, the library has about 1,300 items. We have been prospecting the main academic bibliographic production related to internet subjects (technical, social, legal, and others), as well as reports, periodicals, and technical documents, and also documenting and storing the entire CGI.br and NIC.br bibliographical production, so that the library protects and preserves it. It’s a diverse collection, reflecting the internet’s interdisciplinarity. The library also supports NIC.br teams in their activities, and it is an educational resource useful for capacity-building initiatives, such as the Brazilian Internet Governance School and the Brazilian Youth Program, which can use this library. Considering these three pillars, the project’s second phase aimed to identify and sort the materials produced by CGI.br and NIC.br, estimate the number of digital items and other materials, and from there prospect tools and standards for organizing these materials and making them available. So considering these three pillars, documentation, publishing, and designing the physical library, the following step of the project was to focus on identifying specialized infrastructure for creating and managing collections. 
So we have worked a lot on prospecting and choosing good tools to create this collection and make it useful. At this stage, the support of the Brazilian Institute of Information in Science and Technology, IBICT, was essential. IBICT is a federal organization in Brazil that gave us free consultation and support in exploring suitable tools and standards for creating, managing, and making information sources available. IBICT also guided us in identifying good cataloging practices and standards for bibliographic items and digital objects. Considering the amount of existing materials, their formats, and the project’s proposal of organizing them and enhancing their retrievability, the need for the following technologies was identified during the process. We need an integrated library system for book description and cataloging, with an online catalog for searching, that makes it possible for users to borrow books and use the physical library. We need a digital repository to register, sort, and organize digital materials and make them available in a structured way. We need specific software to establish interrelationships between authors, their affiliations, their production, and subjects. Support for interoperability standards was also a factor we considered: standards that enable the exchange of data and metadata across systems. Creation and management of vocabularies and metadata models are essential to standardize and index the records. Finally, we need a tool that integrates the different platforms into a single search interface. We chose open-source and free software; that was our philosophy. We chose software specialized in collection management with a large number of users, software that many libraries use. So just to briefly mention some of the software we have been using. 
Here is just a representation of some tools that we use to create and manage all this content. This is Koha, a library system that provides the online catalog of the physical collection, book lending, and management of all library activities. It’s open-source software with a large, active community of libraries using it; many libraries around the world use this software. DSpace is software for creating digital repositories. It can be used to manage and make available digital objects, including e-books, videos, audio, and other document formats. Many academic institutions around the world adopt DSpace; in Brazil, many universities use it to create digital collections. On this platform, we make the files available and describe their subjects and authors using the Dublin Core standard, and it also supports interoperability with other systems. TemaTres is a web application for creating and managing vocabularies and taxonomies. VuFind, finally, is a discovery tool that integrates all these databases through a single search interface, allowing the user to search across all of them at once. So in addition to our physical library, repository development has advanced recently. We have the physical library, but the repository is under construction; we plan to launch it soon, making all Brazilian IGF materials available in an organized and indexed collection. It includes the workshop reports and the videos. It is a strategy to increase the visibility and the impact of workshops and other materials of the Brazilian IGF. This is just a screenshot of the collection. The repository interface is under construction, a work in progress. The repository allows creating collections and subcollections; for example, this is a collection of the Brazilian IGF. 
We list our editions, and each edition has its materials: videos, reports, and presentations. This record is from the last edition of the Brazilian IGF. This is our Koha interface, the catalog; users can search in this box to see what titles and books we have. The collection has been growing, so we will expand the physical space. In the future, it will be an open space in which the community can not only access our specialized collection but also receive support to search and retrieve specific information efficiently, taking full advantage of the library’s vast collection. So in the future, the idea is to open this physical library to the community. We aim to expand interaction with the Brazilian community. Our collection is quite unique; researchers, students, and practitioners, among others, will benefit from this library. We hope that people from civil society and the private sector can use this library in the future. Now, some challenges we are facing. Vocabulary construction, the standards needed to build vocabularies, is the main challenge we deal with every day. Most cataloging processes and software require standardized terms, and building controlled vocabularies is a challenge in internet governance as a whole. In our previous open forum in 2017, this topic was pointed out as a general barrier to retrieving information in internet governance. One of the main challenges we face is the availability of collaborative and shared controlled terminology in internet governance and related subjects. So we are working on a vocabulary for the internet governance area, focused on semantic retrieval of digital materials. Another challenge concerns the open-source set of tools we adopted. They are very robust and meet our needs, performing as well as or even better than proprietary software. However, there is significant complexity in their implementation, development, and maintenance. 
But we believe that this open-source set of tools is also an important way to integrate our collection with other collections, establish networks with other organizations, and disseminate CGI.br and NIC.br materials more efficiently, while we also gain access to new and different publications and materials from partner organizations. So just to conclude my presentation, I would like to highlight that the project has potential for many collaborations. We are open to exchanging experience, sharing what we have been learning building our collection or struggling with these tools. Building an internet governance vocabulary is also part of the project; as a next step, it will become a new pillar of the project. The idea is to create a multilingual vocabulary, which will allow us to index materials and standardize them in a structured manner. This work requires stakeholder collaboration. Our proposal is also to always be in dialogue with the IGF community, so we believe it’s essential to build a track on collections in this forum. Thank you. Now we are open to comments and questions.

Vinicius W. O. Santos:
Thank you. Thank you, Jean. Well, this was the overall presentation of our project and its actual status. But before anything else, let me just correct something I forgot to do at the beginning: let us introduce ourselves. My name is Vinicius. I’m here replacing Hartmut Glaser, whose name you see in the session description. He is the Executive Secretary of CGI.br in Brazil. He couldn’t come here, and I’m replacing him as the moderator of this session.

We have many colleagues here from NIC.br and CGI.br, from the advisory team and also from specialized departments of NIC.br, departments that produce a lot of knowledge and materials within the scope we are discussing here: how to sort, classify, disseminate, and so on. Jean, who just presented the actual status of this project, is its coordinator at NIC.br and CGI.br. We also have Amanda with us; she works with him on this project and will help us with the report for this session, so that we are also able to index this report within the set of tools Jean was just presenting. Well, before passing the floor to anyone interested in asking questions, I would just like to say that this session is very important for us, because we think this subject is not much discussed, and we believe it is very important. For example, yesterday we had the main session on the future of digital governance, and a member of the libraries ecosystem asked a question; we chatted with him after the session about these initiatives and other discussions related to libraries and access to knowledge. The libraries coalition is a very important coalition in the history of the IGF. Libraries have had a very important role for the Internet throughout its history, for some time mainly in terms of access to the Internet, and now mainly in terms of access to knowledge, as we know. This is something we are also trying to integrate into the scope of the discussion we are bringing here to the IGF community. The floor is open for questions. If someone has comments or questions, please feel free to ask for the floor. We have Alexandre, we have Everton, and that’s it. So please, Everton, go ahead. Is it working?

Audience:
Yeah, it works. Well, good morning. Thank you, Jean, for the presentation. Thank you, Vinicius, Amanda. I would just like to make a comment rather than a question. We often see many Brazilians joining internet governance-related events all over the world, and I would just like to emphasize how important it is to have content available for the audiences we deal with. The CGI.br collections are a great example of that, because they help so much to qualify the Brazilian audience to reach out, participate, and engage in internet governance-related processes at a much higher level than if they just showed up completely new to the environment. So the CGI.br collections play a very important role for the community in growing that qualification for the audience in Brazil. So that’s more a comment than a question.

Vinicius W. O. Santos:
Thank you. Thank you. Alexandre, please.

Audience:
Yes, good morning, everyone, and congratulations, Jean, on this amazing project. I would say that since the last IGF in Geneva that you mentioned, you have really made huge progress in systematizing all this documentation, and it is, for me, a good example to be followed by other internet governance structures. I would say that, beyond this very good work you have been leading, this could be the basis for a future, more ambitious project on an internet data archive, which is really very important, not only for researchers and policymakers, but for the whole civil society community. So this is a very quick comment, and my question is: since we are dealing with unstructured types of data, documents, and publications, have you ever thought of using machine learning algorithms to categorize this type of documentation based on a taxonomy? I’m asking because at the OECD AI Policy Observatory, they have algorithms that, based on a given taxonomy, categorize all the documents related to AI, such as national strategies, regulatory frameworks, legal frameworks, and also technical documentation. This would further enhance the great potential of the work you have been leading in constructing this database and documentation.

Jean Carlos Ferreira dos Santos:
Thank you, Alexandre. Yes, I think it’s a good question. We are trying to prospect all these tools and good practices for organizing data and materials. We are working now on text and videos, but we have a huge challenge regarding data. NIC.br produces a lot of data, but we need tools and standards to use this data more efficiently and extract good insights from it. A big challenge is to describe it, preserve it, and make it reusable, applying specific standards for data so we can use and share it. I think that is the next phase of the project: how to use this data more efficiently, and how to collect and preserve it and make it useful.

Vinicius W. O. Santos:
Thank you, Alexandre, for the question. Thank you, Jean, for the answer. Raquel, do you want to take the microphone?

Audience:
Yes, I also want to ask a question. For the record, my name is Raquel Gato. I'm also from NIC.br's team, but this is a question in my individual capacity. For Jean and Vinicius, thank you very much, and Amanda, for your work and the presentation. My question is this: for someone who is listening and is inspired right now to replicate what you are doing, taking into consideration the five years of lessons learned that you have, what would be your tips for starting such a project? Do you have two or three takeaways that could help someone replicate it?

Jean Carlos Ferreira dos Santos:
Thank you, Raquel, that's a very good question. In fact, it's a long way. We try to apply standards and practices from library and information science. Library science is not a field that has had a strong interface with Internet governance, but it has a lot of tools and standards, including open-source tools and software, that can help organizations organize the huge amount of content that Internet governance produces all the time. I think we need dialogue with this community. In Brazil, as I mentioned, it helped us a lot, because they had the expertise to identify the right tool and the right standard. Some of these tools are free, but there is a challenge, because we need programming knowledge to configure them, so we also need an interface with the technical community. The other challenge is that Internet governance communities produce a lot of books and reports, but we don't use standardized identifiers or codes that allow this content to be recovered, so a lot of this content is lost. We need to use digital object identifiers and other means to recover and preserve this content. I don't know if I answered your question.

Vinicius W. O. Santos:
Thank you. I will just pass the floor to Alexandre, but first a brief comment. I'm the moderator, but I would also like to comment, building on what Jean just said and on some parts of his presentation. I think there is a word that is very important for many things related to Internet governance, and it also applies to this discussion about information and archiving: partnerships. We need partnerships and collaboration. They are the basis for much of our work within Internet governance, and it's no different for the kind of project we are discussing here. If it were not for the partnerships we had, as Jean mentioned, we probably would have faced more difficulties and more barriers in implementing and deploying this project. This is a work in progress, as he also mentioned, and we still need a lot of partnerships and collaboration to move forward. Alexandre, please.

Audience:
Just a curiosity, Jean. We've seen that many organizations, like ISO or even the OECD, are working on taxonomies for specific specialized areas, such as AI, and ISO is working on a taxonomy for ICT in health, but I haven't yet heard of any initiative to create a taxonomy for Internet governance. I'm just curious whether you know if anyone is working on this. And as Vinicius just said, I strongly believe this should be a collaborative initiative, with many stakeholders building the taxonomy together. Do you know if this exists or not?

Jean Carlos Ferreira dos Santos:
Yeah. As I mentioned, library and information science have this kind of practice. In Internet governance, for the last five years, every IGF edition has had a session, workshop, or dynamic coalition that tries to discuss how to build a vocabulary or taxonomy for Internet governance, but I think the community struggles with what it is, why we need it, and how to do it, because Internet governance is a diverse field. We need a lot of collaboration across many subjects and bodies of specialized knowledge, but we also need to discuss this at the IGF and create more open forums, maybe a workshop, to bring together the information science community, the technical community, and other stakeholders to think about how to create a vocabulary. I also think it's important for the community to reflect on what Internet governance is: is it an area, is it a field? If we think of it as a field of knowledge, we need terms, we need concepts, we need to understand the boundaries, to identify what we are. So I'm getting a little philosophical. That's good.

Vinicius W. O. Santos:
Thank you. Yeah. Please.

Audience:
Good morning. My name is Winston Roberts, and the first thing I must do is apologize for my late arrival; I have been unwell. I tried to send a message to Everton, but the phone call didn't go through, so apologies for that. Because I'm not well, I don't want to speak a lot, but on the other hand I am tempted to, because I am here on behalf of the International Federation of Library Associations, IFLA, and we are heavily involved in Internet governance and the IGF process as one of the multi-stakeholder communities. I was not prepared for this session because I had not really planned to attend it; however, thanks to Everton's invitation, I came along, and I seem to have arrived right in the middle of a very interesting discussion about libraries. So this is an interesting coincidence, but the question is: what do you mean by libraries? The library sector is like any global professional sector. It has regions and sub-regions and categories and types, and it has technical standards for all its different types of operations. When we talk about technical standards, we tend to mean standards for performance and standards for service delivery, and also standards for processes within libraries, but that is not of great concern to you, I think. One area of standards that does concern us is coding, but I'm not an expert in that, so I won't comment further. The main thing to say is that I disagree with a previous speaker who said there is not really an interface between your sector and libraries. I don't agree, because we cannot deliver our services without using the Internet. We cannot possibly do that, because all our services depend on certain applications of the Internet, and the transmission of our information services depends on the Internet. We don't use physical transport so much anymore; we use the Internet.
And if you regard a library as a building with books, then it cannot disseminate its information without using the Internet. On the other hand, a library is not just a building with books anymore. The library is a motor for generating and disseminating information, and it has to use the Internet as a platform. We put content on the Internet, we mediate between Internet service providers and our users, and we provide information to our users. We help them understand the purpose of the Internet, we help them evaluate the information they find on the net, and we help them develop digital literacy and information literacy, which means learning how to judge the truth or untruth of the information they find on the net, how not to be fooled, and how to use it constructively. We have a lot in common with the technical community. One of those things is the inclusion of all sectors of society in the information ecosystems we have today. Inclusion of all, regardless of whether they are men, women, or children, regardless of their beliefs, their religion, their race, their anything: we do not discriminate. Information services are a support for democracy, and we educate our children using these information services, and also in education more broadly. The Internet is used in schools and in school libraries, not just through textbooks but through online information services in classrooms, at least in many countries. So I feel I am turning into a professor in this comment. Sorry, I didn't mean to come along and talk like an academic. What I suggest is that if you want to know more about IFLA, you look at our website, www.ifla.org. IFLA is based in the Netherlands, where it has its secretariat. Two of our senior people from headquarters are here at the IGF.
One of them is Maria de Bradefeuille. She is Mexican, but she speaks many languages. I can give her email address to the secretary or somebody if you like, and you could email her to ask for details. You could also email our policy director, Steven Weiber, whose name you will find on the Internet as well. You could email him to ask about our policies on the Internet, but in particular, look at the information on our website about the Internet Manifesto, which we are developing. We published our Internet Manifesto 10 years ago, and it is being updated now because of all the new technical developments. Things move very fast, as you know, and a 10-year-old manifesto is very out of date. Remember that we are developing a manifesto for libraries, not for you, so we are trying to tell our members what the Internet is and how it is important for them and their societies. It is also important for you to understand how we use the Internet services that you are developing. The two sectors have a strong interface, intellectually and politically, in terms of policy. We have a regional committee for Latin America and the Caribbean, IFLA-LAC. I do not want to give out names, addresses, and emails now in this session, but look on our website and you will find the details of our regional committee and its members, the chair, the secretary, and the regional office. I think you should contact them and ask them all the questions you would like. If they do not know the answers, they will refer them to headquarters. We are heavily involved in the policy process with the multi-stakeholder community, and we support this IGF process; we have been doing so since it started in 2005. I have said enough. I am going to start coughing if I keep talking, so apologies again. If you want to ask me questions after the session, feel free to come and approach me. Thank you.

Vinicius W. O. Santos:
Thank you, Winston, and thank you for your intervention. Don't worry, we were actually talking about libraries, and you arrived at a very good moment. You could not be here for the first part of the session, but it was about presenting the status of our project and discussing some of its challenges, in terms of development and also in terms of collaboration and partnership, as we were discussing when you arrived. Your intervention regarding IFLA is something we try to follow as well within the ecosystem, along with the dynamic coalition on libraries. We are following those developments, so all of this is very connected. Thank you very much for your intervention. Diogo, did you have a question or a comment?

Audience:
Hello? Is it working now? Okay, good morning. It's not a question, it's a comment, or an idea. Alexandre mentioned taxonomies and machine learning algorithms, and since the beginning of the year we have been in a new era with large language models. I think you could explore large language models in this process, because we see a lot of research, from the technical point of view, on using large language models to extract taxonomies or to try to identify them. Of course, the result would not be the final taxonomy, but it could help extract insights. And it's something of a paradox, because a large language model hallucinates, since it does not have curated knowledge, but it is also being used as a tool to extract taxonomies, which then go through a process of human curation and are fed again into new algorithms. So it's just an idea that you could pursue in the future with partners, and we could be a partner in this. Thank you, and congratulations on the project and the presentation.

Vinicius W. O. Santos:
Thank you, Diogo, for your comment and for your proposal of partnership. We will take you up on that anytime. Well, I think we are reaching the time limit for this session; 9:30 is the limit we have. Do we have any final comments or questions? If not, I will pass the word to Jean for some final words, and then we can close the session. Okay, thank you.

Jean Carlos Ferreira dos Santos:
Thank you, Vinicius. Just a few comments, mainly to say thank you. I think we got a lot of insights for our next steps. I just want to emphasize that we are trying to build a library that is more than books. We try to address all the needs around information on Internet governance, to enable the Brazilian community and others to participate in and build Internet governance, and we are open to collaborations and contributions. So that's it. Thank you very much, and we can talk after the session if anyone has questions. Thank you, everyone. The session is closed. See you in other sessions at the IGF and in other moments. Thank you very much. Arigato gozaimashita.

Audience

Speech speed

134 words per minute

Speech length

1900 words

Speech time

848 secs

Jean Carlos Ferreira dos Santos

Speech speed

106 words per minute

Speech length

3219 words

Speech time

1821 secs

Vinicius W. O. Santos

Speech speed

133 words per minute

Speech length

1369 words

Speech time

618 secs

Building Diplomatic Networks for a Safe, Secure Cyberspace | IGF 2023 Open Forum #140


Full session report

Pablo

The analysis raises awareness about the importance of cybersecurity and provides several key points to support this notion. One of the main insights is the need to convince authorities and agencies about the significance of cybersecurity. It acknowledges that countries often confront complex and pressing issues which may cause cybersecurity to be overlooked. Therefore, there is a requirement to advocate for cybersecurity to be prioritized by governments and administrations.

Another crucial aspect discussed is capacity building. The analysis emphasizes the necessity of developing expertise at the national level in order to effectively address cybersecurity issues. Without this capacity building, countries will struggle to tackle the rapidly evolving challenges posed by cyber threats.

Partnerships with stakeholders are also stressed as vital factors in cybersecurity. The analysis highlights the importance of engaging with stakeholders such as the private sector, academia, and civil society. These collaborations are seen as crucial at both national and international levels. Governments need to recognize and appreciate the relevance of including stakeholders in the decision-making processes pertaining to cybersecurity.

The analysis also takes a positive stance towards governmental and international collaboration, reiterating the importance of stakeholder partnerships and of governments and administrations treating cybersecurity as a priority.

In conclusion, the analysis underscores the importance of cybersecurity: authorities and agencies must be convinced of its significance, expertise must be built at the national level, and partnerships with stakeholders are needed at both national and international levels. Together, these insights offer a comprehensive view of why cybersecurity matters, the strategies required to address it, and the key stakeholders who should be involved.

Audience

The discussion focused on multiple key topics related to digital policy and cybersecurity. The participants highlighted the significance of emerging technology and artificial intelligence (AI) in shaping digital policy and diplomacy. They acknowledged the challenges faced by diplomats and policymakers in adapting policy and legal frameworks to handle these innovations. The emergence of new technologies and AI presents opportunities for enhancing digital policy and diplomacy, as well as addressing global challenges.

Public-private partnerships were identified as crucial in the field of cybersecurity. Cooperation with the private sector was seen as bridging the gap between technical expertise and technological resources. The involvement of the private sector in the implementation of policies was considered valuable. It was noted that public-private partnerships provide an opportunity for private sectors to contribute their knowledge and resources towards addressing cybersecurity threats effectively.

Participants stressed that cybersecurity is a transnational issue that cannot be handled by a single nation alone. International cooperation was identified as paramount in mitigating cyber threats. The interconnected nature of cyber threats necessitates collaboration and information sharing among nations. It was highlighted that effective cybersecurity measures require collective efforts and coordination.

The importance of multilateralism and multi-stakeholderism in tackling digital and cybersecurity issues was advocated. Participants expressed a need for a collaborative approach involving multiple stakeholders, including governments, international organizations, civil society, and the private sector. It was argued that engaging different stakeholders can lead to more comprehensive and effective solutions to cybersecurity challenges.

The discussion also drew attention to the digital divide between the global north and south. Concerns were raised about the disparities in digital access, infrastructure, and skills between developed and developing nations. Participants emphasized the need for international cooperation to address this divide. They called for increased capacity building initiatives and the role of digital ambassadors in developing countries to enhance digital literacy and bridge the gap.

The United States' announcement of new "zero trust" internet standards was seen as a positive development for internet security. The zero-trust concept re-architects how the internet works, placing security at its core. It was suggested that everyone owning their own IP address, based on public-private key pairs, could contribute to a more secure internet ecosystem.

The importance of reflecting the lessons learned from programs and initiatives in policies was emphasized. Participants encouraged ambassadors to apply their knowledge of internet security to their respective ministries and governments. They stressed the need for policy changes to incorporate the insights gained from addressing cybersecurity challenges.

Estonia was recognized as a small but influential country in the field of technology and cybersecurity. It was noted that Estonia’s national leadership, investment, and focus have resulted in the development of world-defining expertise in these areas. The relatively low capital expenditure required for technology development in Estonia was also mentioned.

The impact of online threats on development efforts, particularly in small island nations like Jamaica, was discussed. Participants acknowledged that online threats intersect with various developmental concerns. They emphasized the need to address online threats as they can undermine progress in areas such as industry, innovation, infrastructure, peace, justice, and strong institutions.

Participants also highlighted the importance of prioritizing issues wisely in countries with limited resources. They acknowledged that different priorities often compete with each other in small states. Effective decision-making and resource management were seen as key factors in maximizing the impact of limited resources.

Overall, the discussion shed light on the complex and interconnected nature of digital policy and cybersecurity issues. It emphasized the importance of collaboration, multilateralism, and multi-stakeholderism in addressing these challenges. The need for bridging the digital divide, enhancing international cooperation, and prioritizing resources wisely were key takeaways from the discussion.

Garima Vatla

During a series of discussions on cybersecurity and digital issues, the importance of the human component was highlighted. It was observed that this aspect is often overlooked in these conversations, despite its significance. Participants stressed the need to consider how individuals interact with and are impacted by technology. Empowering individuals with knowledge to understand these issues effectively emerged as a crucial factor.

Another key finding from the discussions was the nuanced nature of technology. While it presents numerous opportunities, it also poses significant threats in the form of cybersecurity issues. This highlights the need for a balanced approach in addressing these challenges and maximizing the benefits of technology.

One area that requires increased understanding and clarity is the definitions of cyber and digital diplomacy. Participants noted a lack of consensus and confusion surrounding the terminology and scope of these concepts. It is important to address this confusion to facilitate effective communication and collaboration in the field of cyber and digital diplomacy.

The discussions also emphasized the significance of integrating digital and cyber issues within the broader context of global policy. It was highlighted that these issues should not be viewed as separate entities but rather as integral parts of the overall global policy landscape. Recognizing and integrating these components into policy-making processes is essential for effectively addressing the challenges posed by digital and cyber issues on a global scale.

To summarize, the discussions underscored the importance of considering the human component in cybersecurity and digital issues. Empowering individuals with knowledge, clarifying definitions in the field of cyber and digital diplomacy, and integrating digital and cyber issues within global policy frameworks are crucial for effectively tackling the challenges and opportunities presented by technology in the digital age.

Hideo Ishizuki

Japan is taking significant steps to strengthen its cybersecurity measures. The National Police Agency has established a Cyber Affairs Bureau and a National Cyber Unit to address cyber threats effectively. The Ministry of Defense is also committed to increasing the number of personnel in cyber-specialised units to enhance their ability to respond to cyber attacks.

Efforts are being made to introduce active cyber defense to eliminate the possibility of severe cyber attacks. This proactive approach aims to detect and counter cyber threats before they can cause significant damage. Additionally, there is a focus on enhancing public-private collaboration and reforming the government structure for better coordination in dealing with cyber threats.

The increase in cyber threats has shifted responsibility towards the Ministry of Foreign Affairs. They are tasked with international cooperation for information gathering, analysis, and the formulation of international frameworks. The geopolitical competition, including incidents such as Russia’s war on Ukraine, has heightened the risks of cyberattacks against critical infrastructure.

To build up cybersecurity capacity, regional mechanisms and worldwide efforts led by organisations like the World Bank are proving effective. The ASEAN-Japan Cybersecurity Capacity Building Centre, established five years ago in Bangkok, has trained over 1,000 individuals from ASEAN member states. Furthermore, the World Bank has established a multi-donor cybersecurity trust fund to support global cybersecurity capacity building initiatives.

However, a common issue is a lack of resources dedicated to cybersecurity. The Ministry of Foreign Affairs in Japan suffers from a staff shortage in dealing with cybersecurity issues. Leadership does not adequately prioritize the importance of cybersecurity, which further affects resource allocation in this area.

To address this issue, it is crucial to highlight the importance of investment in cybersecurity. Efforts are being made to demonstrate the return on investment in cybersecurity measures. However, one challenge is that the impact of these efforts is not easily visible until a cyber attack occurs.

The Ministry of Foreign Affairs plays a crucial role in collecting and disseminating threat information and incident cases from abroad. This information is essential for government agencies to protect themselves from threats effectively. The dissemination of this information within the government is vital to ensure a coordinated response to cyber threats.

International participation by law enforcement agencies in countermeasures is also crucial. Japan is actively involved in the United States-led Counter Ransomware Initiative, in which more than 40 countries participate. Such international cooperation helps generate interest and investment in cybersecurity.

In conclusion, Japan is committed to strengthening its cybersecurity measures through the establishment of specialized bureaus, increasing personnel, introducing active defense measures, and enhancing public-private collaboration. While regional mechanisms and global efforts are proving to be effective, a lack of dedicated resources poses challenges. Nonetheless, the Ministry of Foreign Affairs plays a significant role in collecting and disseminating threat information, and international cooperation in law enforcement agencies is essential for effective cybersecurity.

Nathaniel Fick

The extended summary focuses on the importance of technology and cybersecurity in building diplomatic networks and creating a safe cyberspace. It emphasizes that success in these areas requires the involvement of people, processes, and technology, with the human element being particularly significant. By building connections and strengthening diplomatic networks, countries can work together to address cybersecurity challenges effectively.

The need for a global network of trusted counterparts is emphasized as essential for responding to emergency situations. When a crisis occurs, being able to rely on a trusted counterpart to provide assistance and support is crucial. Developing a framework for responsible state behavior in cyberspace can contribute to the creation of such networks.

Mainstreaming technology diplomacy globally is seen as highly important. This entails building diplomatic networks to ensure a safe and secure cyberspace. Ambassadors Hideo Ishizuki and Regine Grienberger are lauded as pioneers in this field, showcasing the impact of technology diplomacy.

The significance of achieving basic connectivity for the unconnected is emphasized. With around 2.8 billion people still lacking access to basic connectivity, it is essential to prioritize efforts to bridge the digital divide. Without basic connectivity, individuals are unable to participate in the advantages offered by emerging technologies, further exacerbating inequalities.

Capacity building is discussed as a challenge for the State Department and not limited to developing nations. It is recognized that building capacity within organizations is applicable to everyone.

The positive stance towards the priority of achieving basic connectivity for the unconnected is reiterated. By ensuring universal access to basic connectivity, more individuals will have opportunities to benefit from emerging technologies.

The importance of collaboration and open-mindedness in decision making is highlighted. It is crucial to assume good intentions and be open to considering other points of view. The ability to hold opposing ideas simultaneously is deemed a mark of intelligence.

A fresh perspective is seen as beneficial, particularly amongst those new to government bureaucracy. Nathaniel Fick, who has been in government bureaucracy for little more than a year, is mentioned as an example.

The importance of building digital and cyber skills within countries, regardless of their size, is underlined. Estonia’s success in cybersecurity despite being a small country is commended, highlighting the notion that any country can develop world-defining expertise with focus and discipline.

Cybersecurity is regarded as a cost, with efforts focusing on avoiding negative consequences. The prioritization of cybersecurity by leaders is highlighted, with a call to emphasize the positive aspects and opportunities associated with the digital shift. The integration of digital and cyber issues into global policy matters is recommended.

The conclusion of the Jellix Fellows program is acknowledged, with gratitude expressed towards Regine and Hideo for their contributions and partnership. Ongoing collaboration with the fellows is anticipated, and the fellows are looked upon to set the tone for future classes.

In summary, the extended summary emphasizes the significance of technology, cybersecurity, collaboration, and the acquisition of digital and cyber skills in building diplomatic networks, ensuring a safe cyberspace, and addressing emergency situations effectively. It highlights observations such as the need for a global network of trusted counterparts, the role of pioneers in technology diplomacy, and the importance of open-mindedness and fresh perspectives.

Regine Grienberger

The discussion highlighted the significance of cyber diplomacy and its cross-cutting approach. It was argued that cyber diplomacy requires a holistic, governmental approach, involving coordination among different departments and ministries to address cybersecurity challenges. It was emphasized that foreign ministries, military, and various agencies are key players in cyber diplomacy.

Additionally, the role of career diplomats in cyber diplomacy was discussed, with an emphasis on their ability to bring a generalist approach and connect commonalities among security policies. The relevance of the diplomatic toolbox in cyber diplomacy was also highlighted.

The speakers stressed the need for an entrepreneurial spirit and confidence in cyber diplomacy, as it is a new concept that requires promotion and adaptability. Diplomats in this field may need to take risks and be comfortable with uncertainty.

The discussion also acknowledged the relevance of cybersecurity to national and international security, with the potential for cybersecurity concerns to turn into national security threats.

The interlinkage between digital transformation and cybersecurity was emphasized, with a suggestion to focus on opportunities rather than just risks. Both aspects were seen as interconnected and requiring attention.

Investment in cybersecurity capacity building was discussed, noting the indirect rewards of programs that assist law enforcement in tackling cybercrime in other countries. The need for international collaboration in addressing cyber threats was highlighted.

Lastly, the importance of increasing international competencies in dealing with cybersecurity across all government players was emphasized, pointing to the formulation and implementation of a national cybersecurity strategy.

Overall, the discussions provided insights into the nature of cyber diplomacy and the various factors that need to be considered for effective implementation. These insights are valuable for policymakers and stakeholders involved in cybersecurity and diplomacy efforts.

Sharif

Collaboration between technology and policy discussions plays a critical role in bridging the gap, as highlighted by various speakers at the conference. They emphasized the necessity of collaboration in achieving SDG 9 (Industry, Innovation, and Infrastructure) and SDG 17 (Partnerships for the Goals). The dialogue emerging from the UN Group of Governmental Experts (GGE), the Open-Ended Working Group (OEWG), and the Counter Ransomware Initiative (CRI) all stressed the importance of collaboration.

To further facilitate these discussions, it has been suggested that specific bodies responsible for maintaining discussions be established. Noteworthy examples include the National Cyber Security Coordination Centre in Nigeria and the National Authority for Cyber Security in Albania. By having dedicated agencies, countries can ensure effective communication and cooperation between various stakeholders in the technology and policy realms.

Fellowship programmes, such as the GLX programme, have also been recognised as valuable resources for understanding the interplay between technology and policy entities. These programmes offer insights into the complex dynamics and interactions between different sectors, providing additional perspective on how to bridge the gap effectively.

Overall, the sentiment from the conference was positive regarding the importance of collaboration between technology and policy discussions. Working together is clearly crucial to addressing the challenges highlighted by SDG 9 and SDG 17. By embracing collaboration and establishing dedicated bodies responsible for maintaining discussions, stakeholders can foster meaningful dialogue, overcome barriers, and collectively work towards achieving the goals outlined by the United Nations.

Maritza Ristiska

The GEL-X network is a highly regarded asset in the realm of cyberspace security. Comprising dedicated diplomats and experts from around the world, this network is pivotal in advancing international cooperation, building trust, and bolstering resilience to cyber threats. By facilitating collaboration among nations, the GEL-X network plays a crucial role in addressing the global challenge of cybersecurity.

One key argument in support of the network is its ability to enhance cooperation on a global scale. This is achieved through its experience within the OEWG on ICT and the UN Ad Hoc Committee on Cybercrime. By serving as a platform for the timely sharing of information related to cyber threats, the network promotes proactive and coordinated responses to emerging challenges in the cyber domain. This exchange of information is instrumental in enabling nations to stay abreast of evolving threats and develop effective countermeasures.

Moreover, the network’s impact lies not only in its ability to facilitate cooperation but also in its potential as a coordination hub during and after cyber incidents. During large-scale cyber attacks, the network enables swift communication and coordination among nations. This real-time collaboration is essential for mounting effective responses to mitigate the damage caused by cyber threats. Furthermore, the network provides technical forensic evidence, aiding in the attribution process during cyber incidents. This attribution capability is crucial in holding responsible parties accountable for their actions and deterring future cyber attacks.

Notably, the GEL-X network’s efforts align with the overarching goal of promoting responsible state behavior in cyberspace. By advancing international cooperation, sharing information, and enabling swift coordination, the network contributes to establishing a more secure and stable cyberspace environment. Responsible state behavior is critical in maintaining peace, justice, and strong institutions, as well as fostering continued innovation and infrastructure development.

In conclusion, the GEL-X network is an invaluable asset in cyberspace security. Its role in advancing international cooperation, enhancing coordination during cyber incidents, and promoting responsible state behavior makes it instrumental in addressing the challenges of cybersecurity. As the landscape of cyberspace continues to evolve, the GEL-X network’s contributions will play a pivotal role in securing the digital domain and safeguarding global interests.

Sumiya

The analysis delved into the importance of understanding cyber landscapes and cyber diplomacy from three different perspectives. Firstly, one perspective highlighted the crucial nature of comprehending a country’s cyber landscape. The argument presented was that in order to navigate the complexities and challenges of the ever-evolving cyber realm, it is imperative for countries to have a deep understanding of their own cyber landscape. This involves understanding the various agencies and entities involved, as well as recognising the role of foreign ministries in facilitating collaboration between public and private entities.

The second perspective explored the benefits of learning from the U.S. cyber diplomacy and cyberspace. It was noted that the U.S. State Department organizes a program that provides insight into U.S. cyberspace, thereby enhancing the participants’ ability to understand their own ecosystem. By studying the approaches and experiences of the United States in the realm of cyber diplomacy, countries can gain valuable knowledge and apply best practices to their own contexts.

The third perspective advocated for supporting collaboration and understanding in cyber diplomacy. The discussions stressed the importance of foreign ministries in fostering partnerships and collaboration between different stakeholders. The analysis emphasized the need to continue such collaboration and understanding in order to address the complex challenges of cyberspace effectively. By working together, countries can create a more secure and resilient cyber environment that promotes peace, justice, and strong institutions.

In conclusion, the analysis highlighted the significance of understanding cyber landscapes and cyber diplomacy from multiple angles. It emphasized the role of foreign ministries in fostering partnerships and collaboration, and the benefits of learning from the experiences of other countries, such as the United States. A comprehensive understanding of cyber landscapes and effective cyber diplomacy is crucial in today’s interconnected world to ensure the security and stability of cyberspace.

Christopher Tate

The United States has introduced new standards called zero trust to enhance internet security. These standards enable the re-architecting of the internet core, providing improved protection against cyber threats, and ensuring the confidentiality, integrity, and availability of information. This proactive approach aims to mitigate potential risks and addresses the growing concern for internet security.

Connect Free has unveiled a revolutionary concept where individuals can own their IP addresses. This concept is based on a public-private key pair, ensuring a secure and unique identification for each user. Connect Free’s aim is to promote internet accessibility and reduce inequalities in terms of internet connectivity.

Recognizing the complexity of the internet, there is an acknowledgment of the challenges faced by diplomats. Christopher Tate, an IT expert, has apologized on behalf of the IT community for making diplomats’ jobs more difficult due to the intricate nature of the internet. This recognition highlights the need for collaboration between the technical and diplomatic sides.

However, there is a belief in the potential of productive collaboration between the technical and diplomatic realms in addressing internet security and accessibility. By bridging the gap between technical expertise and diplomatic efforts, effective strategies and solutions can be developed to tackle the complex issues related to internet security and accessibility.

In conclusion, the introduction of zero trust standards by the United States and the concept of individual IP address ownership by Connect Free are significant advancements in the field of internet security and accessibility. Despite the challenges posed by the complexity of the internet, there is optimism and appreciation for collaboration between the technical and diplomatic sides to overcome these challenges. This cooperation is crucial in ensuring a secure and inclusive internet for all.

Session transcript

Nathaniel Fick:
All right, let’s begin. Welcome, everyone, and good morning. There we go. That’s the problem with putting me in charge. Welcome, everyone, and good morning. My name’s Nate Fick. I’m the US Ambassador for Cyberspace and Digital Policy. And it’s a thrill to be here this morning with our class of global emerging leaders in international cyberspace security, JELLX for short. I had the great privilege of being with this group at the beginning of their fellowship year at the RSA conference in San Francisco. And we pledged then that we would be together again here in Kyoto at the Internet Governance Forum. And it’s such a pleasure to see it all actually come together that way. We just had a bit of a graduation ceremony for the fellows in the other room a few minutes ago. They are diplomats and government experts from 20 countries, almost literally every corner of the world. And what they have in common is a commitment to the importance of technology issues and cybersecurity and foreign policy, an understanding that these issues are becoming more important and more central, and I think a visceral appreciation that this is, in fact, a team sport, and none of us can do it alone. I am pleased, honored, to be sitting between two of my colleagues who help ensure every day that I don’t have to do this alone, representing our host government here in Japan. To my left is Ambassador Ishizuki Hideo. Ishizuki-san is ambassador for international security and cyber policy in the Ministry of Foreign Affairs of Japan. And a wonderful colleague, one of the pioneers in technology diplomacy, and someone who welcomed me to the fold when I was appointed to this role a year ago. 
And to my right is Ambassador Regine Grienberger, ambassador for cyber foreign policy and cybersecurity in the Federal Foreign Office of Germany, also a pioneer, someone who has been at work mainstreaming technology diplomacy around the world, and another in this community who welcomed me to the fold when I took the job. The title of this session is Building Diplomatic Networks for a Safe, Secure Cyberspace. And it drives home, I think, the point that success in these areas is really about people, process, and technology in that order. We often want to default right to the technical answer. But, as you said at the beginning of our session a little while ago, we can’t forget the human element. And in fact, the human element matters more than the others. So this gathering was intended to build connections in order to strengthen our diplomatic networks in the service of a safe, secure cyberspace, and to create champions, really, for the power of the framework for responsible state behavior in cyberspace, and to create global networks of people who, when the proverbial bad thing happens, can pick up the telephone at 3 o’clock in the morning and have a trusted counterpart on the other end of the line who can help them solve a problem. So in the discussion this morning, we are going to hear from several of the fellows. We will hear from the ambassadors to my left and right. We’ll turn this into a conversation around the table. And I just want to say at the outset, again, thank you all for committing your time and energy to this fellowship over the past year. And this really is only the beginning. We look forward to working with you for years, and hopefully decades to come. So having said all of that, our first question is for Siriprapa of Thailand, who is sitting. There we go. So I would love to hear, just quickly, insights that you gleaned from the program, and three or four that matter most to you as a diplomat engaged in these issues around the world.

Audience:
So thank you, Ambassadors. Delighted to meet you again, Ambassador Fick, and also Ambassador Hideo Ishizuki and Ambassador Regine Grienberger. So I’m Siriprapa from the National Security Council of Thailand. I’m here with a very diverse cohort. And in my group, to be honest, we discussed this question prior to today’s session because we need a more inclusive answer. So we have here a representative from the Dominican Republic, Estonia, and Poland, and also Indonesia in my group. The first point that we gleaned from the program is especially the emerging technology and AI. So for us, diplomats and policymakers, we feel that it is so challenging that we need to be more agile in adapting the policy and also the legal framework of our country to handle this situation of the innovation. The second one is maybe the most cliche word that you hear along the IGF. It’s about the public-private partnership or the multi-stakeholder. So the cooperation with the private sector helps bridge the gap between us in terms of technical expertise and also technological resources. But it is not only us gaining from the private sector; the private sector also got a chance, the opportunity, to be in the session and help us implement the right policy for them as well. And the last point is about the relationship between geopolitics and cybersecurity. So it’s undeniable that cybersecurity, the cyber threat, is a transnational issue. So one single nation cannot handle this kind of threat alone. So we need the cooperation, the international cooperation. So in a nutshell, it is cliche that you may hear the words multilateralism, multi-stakeholderism. But it is a must that we need to go in that direction. So if you want to move on to the next cliche, let’s do it right now. Thank you.

Nathaniel Fick:
All right, thank you. Next question is for Maritza. And this is fairly straightforward. In what practical, concrete ways do you think this network will be useful in your work?

Maritza Ristiska:
Thank you, Mr. Ambassador. Good morning to everyone. Good morning to the esteemed ambassadors. I’m Maritza Ristiska from the Ministry of Foreign Affairs of North Macedonia. I will speak on behalf of my colleagues from Georgia, the Dominican Republic, and Jamaica. We believe that this network is a valuable asset in the complex and rapidly evolving landscape of cyberspace security. Our network is composed of dedicated diplomats and experts from all around the world, who should play a very important role in advancing international cooperation, fostering understanding, building trust, and strengthening resilience to the wide spectrum of cyber threats. Our network could be effective in several key ways. It can facilitate and enhance cooperation, both multilaterally and bilaterally, with international organizations such as the UN. For example, we already have experience within the OEWG on ICTs and the UN Ad Hoc Committee on Cybercrime. It can also facilitate and enhance cooperation, both multilaterally and bilaterally, within the framework of regional organization initiatives and platforms. For example, we already have cooperation within regional organizations such as the OSCE in Europe and the OAS in the Americas. Furthermore, it can serve as a platform for the exchange of policy ideas and approaches to cybersecurity issues, both at international and national levels. It could serve as a platform for timely sharing of information, insights, and best practices related to cyber threats and vulnerabilities. In case of massive cyber attacks against one of our countries, the network can enable swift communication and coordination among our nations. To effectively address large-scale cyber attacks against a single state, a coordinated and multifaceted international approach is of immense, immense importance in order to ensure accountability and provide information and technical forensic evidence in order to facilitate attribution.
In this context, our network can serve as a hub for coordination during and after a cyber incident. In conclusion, I would say that the GEL-X network will improve the understanding among countries so we can advance responsible state behavior in cyberspace, to ensure the world is a safer place and to facilitate national development goals.

Nathaniel Fick:
Thank you, Maritza. Next question is for Sharif. And I think we’ve said that all of us who are technology or cyber diplomats, to some extent, are bridges, bridges between our mainstream ministries and the technical community. And there’s a saying that bridges get walked on. So Sharif, what’s your recommendation to avoid the technology and policy conversations being separate? How do you bridge these communities?

Sharif:
Thank you very much. So I’m sure you can hear it. So yeah, like, you know, there were a lot of discussions in that area that we’ve been having. Thank you. She just talked about collaboration, partnership. We’ve been hearing that almost throughout the conference today. And, you know, through discussions that came out from the UN GGE, the OEWG, the CRI, it has been collaboration, collaboration, collaboration. And our experience is also like that. So to keep these discussions together instead of separate, we feel, for example, in the Philippines, they used to do some cross-functional team building. They do meetings that bring these people together to have these discussions. And we also think there should be specific agencies that are responsible for keeping people together. In Nigeria, for example, we are just setting up the National Cyber Security Coordination Center. And there’s the National Authority for Cyber Security in Albania. With a team like this, you can hold someone responsible for those discussions. And the National Authority for Cyber Security in Albania, for example, they are also working on a communication to help delineate these responsibilities and to have a collective front when having these discussions. And one last thing I’d like to also say: to take these discussions outside the shores of our countries, programs like GLX, like the one we are in, can really help bring people together and build a collective knowledge of understanding the interplay between the two responsibilities, the two groups, rather. Thank you very much.

Nathaniel Fick:
Thank you, Sharif. And I think another reality in this work is that each of our countries is at a different stage of maturity in dealing with cybersecurity issues. And nonetheless, there are commonalities that cut across every level of the capacity building spectrum. So Pablo, what are some of these commonalities that you and colleagues face when it comes to promoting international cyberspace security?

Pablo:
Thank you, Ambassador, for the question. Also, thanks to Ambassador Ishizuki and Ambassador Grienberger; I met you before. And I’m going to speak on behalf of my lovely group from Ecuador, Malaysia, and South Africa, quite a diverse group, by the way. And I have to agree that it is people, actually, who make a difference in the process of promoting cybersecurity, no matter if you’re in a ministry for foreign affairs or other agencies. Maybe I want to highlight three elements. The first one is probably one of the most important ones: how you make cybersecurity a top priority for your governments or different administrations. Because our countries, our states, have to face and deal with different threats. They are very complex, urgent sometimes. And sometimes cybersecurity is probably not the top priority. And you have to do a lot, I mean, in terms of convincing your own authorities, other agencies, that this is important. And as an old boss sometimes used to tell me: Pablo, urgent things come first, and important things later. But trying to make this something urgent and important, permanently, is a daily effort. And sometimes it’s frustrating, sometimes quite exciting. But it’s something you must do, definitely. The second point I would say is something which is quite important in terms of cybersecurity today. We were discussing it a lot during this forum. It’s about capacity building. One really has to work a lot in terms of how you create your expertise at national levels and get training and financial assistance. This is something, I would say for all the states, but especially in my group, that is quite critical. And I would say strategic, important, because with no capacity building, there’s no way to face the problems you have right now in cyberspace. And that is also a permanent task. It’s very important. And that means resources, which are not easy to get.
So that’s also very important to highlight. And the last one is what this forum is all about: your work and your partnership with stakeholders. This is something that, of course, we are trying to do more of, in terms of your work with the private sector, the academia, civil society. It’s about trying to make everyone understand that those stakeholders are really important and relevant to include in your own process. Because sometimes governments say, well, this is just a government issue. We need to engage, but you have to tell them they are important, not just at the national level, but also in the international discussion arena. So I would say these three elements are something we have as a permanent task in our group. Thank you.

Nathaniel Fick:
Thank you. My last question for the fellows before we turn to the ambassadors is for Sumiya. And this is an annual program, although this is the first class. What advice do you have for those who will come behind you about how to get the most out of it?

Sumiya:
Thank you very much, Excellency, and good morning to you all. I am speaking on behalf of my very wonderful colleagues from Panama, Jordan, and Costa Rica, and I’m from Bangladesh. We are very proud to be the first cohort because we have learned all sorts of new things. So we believe that our advice to the future cohorts, as mentors, would be very effective. In this journey, it would be vital to have a very comprehensive understanding of their own country’s cyber landscape, including the various agencies or entities involved and their unique roles in the global context. In that line, multistakeholderism comes in, because it’s so essential in this venture, and we believe that the foreign ministries will play a very important collaborative role to bridge between the public and the private entities. Next, we are grateful that this program has been arranged by the U.S. State Department, which has given us a way to look into U.S. cyberspace, how you have dealt with cyber diplomacy, and how the United States has evolved its venture into this space. This will also give the future cohorts an insight into this, and into the various stakeholders and counterparts. This interaction will enhance their ability to understand their own ecosystem. So we would encourage them to further do so. And summarizing all this, it brings the past cohorts, which is us now, and the future cohorts much closer to achieving what we had actually set out to achieve in the first place when we began in May, and, if I may quote our ambassador, this will be an opportunity to turn to a friend, an ally, or a fellow when we are having a bad day, or a good day. Thank you very much.

Nathaniel Fick:
Thank you very much for that. And now, for Ambassador Ishizuki, your government’s commitment to international leadership on cybersecurity and technology issues is clear. It’s demonstrated by hosting 9,000 attendees in person and virtually at this IGF. How is Japan organized around cybersecurity at the government level, and what’s changing as the MFA now continues to elevate these topics in Japan’s foreign policy? Thank you.

Hideo Ishizuki:
Thank you, Ambassador. Good morning, everybody. I’m quite honored to be here to participate in this forum. First of all, on behalf of the officials of the host country, I would like to extend a warm welcome to all of you who are here today. And I think there is also someone attending online. For those who are lucky to be here in Kyoto, please enjoy the rest of your stay in Kyoto. I’m Hideo Ishizuki, Ambassador in Charge of Cyber Policy, Ministry of Foreign Affairs. I took up this position as Cyber Ambassador last October, so it’s been exactly one year. And I must confess, this is a totally new area for me, even after my 30 years of diplomatic career. What I learned from my one-year struggle is the value of human networks across the board. In that sense, I really appreciate the personal relationships with Ambassador Fick and Ambassador Grienberger, both of whom I met for the first time in Singapore exactly one year ago at Singapore International Cyber Week. And I believe that the human network fostered through this fellowship will be a good foundation for international cooperation. Now let me turn to your question: how is Japan organized around cybersecurity at the government level? In Japan we are facing increasing threats of malicious cyber operations, including ransomware and attacks against critical infrastructure like hospitals, and the reported cases of ransomware incidents increased by 58 percent from 2021 to 2022, so it’s a huge increase. To respond to these increasing threats, Japan is strengthening its cybersecurity organizations. In April 2022 our National Police Agency established the Cyber Affairs Bureau and National Cyber Unit with 2,700 fully engaged personnel.
Also, the Ministry of Defense aims to increase the number of Self-Defense Force personnel in cyber specialized units from the current 890 to 4,000 by fiscal year 2027. This is a very ambitious plan, but we are trying to achieve that goal. In addition, we are currently working on the challenges tasked by our national security strategy, which was issued in December last year, in order to strengthen our response capability in the field of cybersecurity. These challenges include, as somebody has already mentioned, enhancing public and private collaboration, which is a big challenge. We are also exploring ways to introduce active cyber defense for eliminating in advance the possibility of serious cyber attacks. And there is also a need to reform the government structure. We need to set up a new organization which will coordinate cybersecurity policies in a centralized manner, allowing us to take a more effective whole-of-government approach across sectors. So these are the challenges we face. As for the Ministry of Foreign Affairs, the national security strategy has given us the task of enhancing international cooperation for information gathering, analysis, attribution, and public announcement, as well as the formulation of international frameworks and rules, including those at the UN, for responsible state behavior in cyberspace. So in Japan, threats in cyberspace have come to be viewed as more related to international security. Traditionally this has been the area of the law enforcement agencies, but now it has become more important to the Ministry of Foreign Affairs. This is because of, as somebody has already mentioned, increasing geopolitical competition such as Russia’s war on Ukraine, together with increased threats of disinformation campaigns as well as increased risk of cyber attacks against critical infrastructure. Then I must confess that maybe the biggest challenge we face is a shortage of staff in the ministry.
The policy challenges we need to address in cyberspace and the importance of diplomacy in this area never stop increasing. We need more staff to handle these increasing tasks. I think this may be a common challenge; as Pablo has already said, this is a commonality we face. Ministries of Foreign Affairs all over the world may face this kind of situation, and I think this is where the value of this fellowship lies. This is a very important endeavor to improve the level of diplomats in cyberspace, and I think that’s where the value of this fellowship is. With that I conclude my remarks. Thank you very much.

Nathaniel Fick:
Ishizuki-san, thank you for that and for sharing your insight. So Ambassador Grienberger, you’re a career diplomat with a background in economics and financial issues and agricultural issues. How do you bring that experience to bear now negotiating and discussing cyber policy topics?

Regine Grienberger:
Thank you, Nate, for that question. Where could I start? I think cyber diplomacy as such is of a cross-cutting nature. So you don’t only deal with one particular field of experience or expertise within the foreign ministry, but you have to connect several dots that lie mostly in different departments, not only within the ministry but also within the government, and Ambassador Ishizuki mentioned this already. The joint approach of a government to tackle these cybersecurity challenges has to be strengthened. And my personal background helps me because I am not specialized in any of the fields, so I have a general approach by my own training. The second element that is important to understand, and it was also mentioned, but I would like to highlight it again and stress it a little bit more, is that we are basically speaking about a security policy portfolio, which means that there is a role for foreign ministries. It’s not only for agencies, homeland affairs, military. It is really a foreign and security policy issue, and so a career diplomat like myself, who has wandered through different bureaus and has served at different assignments, sees also the commonalities of this particular field of security policy with others. So this, I would say, is another element from my personal background that helps me deal with the portfolio that I have now. And then of course, while the internet is not new, cyber diplomacy is quite a new, avant-garde topic, I would say, in many of the ministries, and is not very well structured even in our case, where we established the first unit for cyber diplomacy in 2011, so 10, 12 years ago, and still we have to fight for the awareness that Pablo also mentioned, the awareness at the highest level of leadership in our ministry. And what is necessary to do this is this entrepreneurial spirit that you perhaps can also relate to.
This spirit that means you have a new you have a product that is interesting for other people and you try to sell it to them. And then perhaps also something that is important to have confidence as a you know as a diplomat who has different experiences but has no cyber experience. I mean this is basically about diplomacy. So we are diplomats. We use the diplomacy toolbox. We don’t fix the computers of our colleagues. You might help them when the printer is out of order but basically we are diplomats. So the diplomatic toolbox is valid also here. Ambassador Ishizuki mentioned the multilateral negotiations. I mean this is tradecraft at its best. So we have to deal with the same you know terms of reference as in other fields. And you mentioned it that my last point it’s team sport. So you might not be an IT expert yourself but you have to work with IT experts and that is something that I have learned during my all of my career that I always have to turn to colleagues who know it better than myself and then my role is to bring all these arguments and perspectives to the table. Thank you for that really

Nathaniel Fick:
That was a really wonderful synopsis. I think it ought to drive home the point, for all of us, that colleagues who may not have as much exposure to these topics as we collectively now do need not be intimidated. There is a role here for diplomats with diplomatic skills, and we need to apply those skills in this new and emerging domain of diplomacy. With that, I think we have time for discussion. My colleague Catherine Fitrell will moderate any questions that come in from online viewers, but this is also an opportunity for audience questions or comments, for the fellows to weigh in with more thoughts or questions, and of course for my colleagues. Yes, can we pass the mic, only because otherwise it may be very hard to hear online. Thank you.

Audience:
Hi, Ambassador Fick, Ambassador Ishizuki, Ambassador Grienberger. The other key phrase we learned in the different panels was the gap between the Global North and the Global South: not only multistakeholderism, but also this gap that exists between the two regions of the world. In terms of international cooperation, what is your best advice to digital ambassadors from developing nations on capacity building and digital literacy?

Nathaniel Fick:
I’m happy to give my colleagues the first crack at that.

Regine Grienberger:
The first thing to be done is to acknowledge the relevance of cybersecurity for national security. That might happen through a national security strategy or a cybersecurity strategy; from whichever angle you approach it, in the end it has to be clear that this is a national security issue, and, as somebody among you said, it is actually an international security issue. So there should be some reflex in a foreign ministry to claim ownership of this topic; that is a leadership decision. The next one is literacy, which, as you said, is important, but literacy can be acquired via different pathways. There are a lot of opportunities, for example online learning tools. You don’t have to start out as a cyber diplomat; you can become a cyber diplomat on the job. Those would perhaps be my two pieces of advice.

Hideo Ishizuki:
Maybe I’m speaking from a bit of a different perspective. To build up capacity in this area, we think that regional mechanisms or regional efforts can be effective. With the ASEAN countries we have been working on capacity building support, and we set up a center in Bangkok called the AJCCBC, the ASEAN-Japan Cybersecurity Capacity Building Centre. It was established five years ago, and during those five years we have given training through the AJCCBC to more than 1,000 people from the ASEAN member states. These kinds of regional efforts can be useful to level up the capacity of each state. And, just as a bit of advertisement, the World Bank is also working on capacity building support and has recently set up a multi-donor cybersecurity trust fund. This kind of worldwide, multilateral framework effort is also useful for leveling up the capacity of each country. Thank you.

Nathaniel Fick:
Let me just add my perspective on this. It’s easy, as a cyber and tech diplomat, to spend all of your time and energy focused on the most sophisticated aspects of this work and on all the advantages these emerging technologies can bring to our societies, but we should never forget that there are still 2.7 or 2.8 billion people on this planet who are not connected, and without basic connectivity they will have little or no opportunity to share in all of those advantages. So priority number one, in many regards, in my view, is that basic connectivity. A second observation is that capacity building actually applies to all of us. It’s not just a matter of more developed nations building capacity in those that are less developed in these areas. A key challenge for us at the State Department is building capacity inside our own organization, where we don’t have nearly the skill base we need to meet the challenges in the world in these areas. That can sometimes seem like a daunting challenge, but there is very little need to reinvent the wheel. We can take what anyone has done or learned and try to share it or repurpose it for others, to accelerate their path up that maturity curve. So we try to keep both of these aspects in mind when we’re building or negotiating or discussing policies across the full spectrum of technology issues. Okay, I’ll need someone with eyes behind me; we’ve got the video as well.

Christopher Tate:
So, good morning everyone. I’m Christopher Tate with Connect Free and Internet 3. I’d like to apologize first, on behalf of all IT experts, for making your jobs harder as diplomats; it wasn’t really our idea to make the internet a hard thing to do. But there is hope, in the sense that the United States has brought a lot of new standards online under the name zero trust, which means we can re-architect the way the internet works to bring security into its very heart. A lot of really hard work is going on to keep the existing infrastructure in place while also extending it to many new places. Here at IGF, Connect Free has announced a new way of thinking about the internet in which everyone can own their own IP address, based on a public-private key pair. That means everyone can essentially generate their own IPv6 address and so extend the network into regions where it is really hard to bring in network operation centers and other infrastructure. I really appreciate your delegation being here, and your time and consideration, and I really think there is a lot we can do on the diplomatic side and on the technical side to bridge the gap, as it were, between the two. Thank you for your attention and time. Yes.
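The key-pair-to-address idea mentioned here can be sketched in a few lines. This is an illustrative scheme only, not Connect Free’s actual design: the helper name, the choice of SHA-256, and the use of an `fd`-prefixed (unique-local-style) address range are all assumptions for the example. The point it demonstrates is that a stable IPv6 address can be derived deterministically from a public key, so ownership of the address can later be proven by signing with the matching private key.

```python
import hashlib
import ipaddress
import os

def keypair_to_ipv6(public_key: bytes) -> ipaddress.IPv6Address:
    """Derive a stable IPv6 address from a public key by hashing it.

    The first byte is forced to 0xfd so the result lands in a
    unique-local-style range; the remaining 15 bytes come from a
    SHA-256 digest of the key, giving each key its own address.
    """
    digest = hashlib.sha256(public_key).digest()
    return ipaddress.IPv6Address(b"\xfd" + digest[:15])

# A mock 32-byte "public key" stands in for a real Ed25519/Curve25519 key.
pk = os.urandom(32)
addr = keypair_to_ipv6(pk)

assert str(addr).startswith("fd")          # lands in the chosen range
assert keypair_to_ipv6(pk) == addr         # same key, same address
```

Because the mapping is deterministic, no central registry is needed to hand out addresses, which is what makes the approach attractive for regions without network operation centers.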

Audience:
Good morning, Ambassador Ishizuki, Ambassador Grienberger, and of course Ambassador Fick. I have a question directed to Ambassador Fick. I read somewhere that a certain Marine in the early 2000s stated that good commanders act and create opportunities; great commanders ruthlessly exploit those opportunities. So, in order for us fellows to be great leaders, as a person who rose through the ranks of leadership in many forms, how would you advise us to go about, in our post-Jellix journey, effecting the necessary changes and updates in policy in our respective ministries and governments to reflect the lessons we have learned from this great program, this great opportunity provided by the U.S. Department of State and Meridian? Thank you.

Nathaniel Fick:
Thank you for the question, and I appreciate the research that may have gone into it. Look, a couple of observations here. I am new to this government bureaucracy and, still only a little more than a year in, have the benefit of fresh eyes. So there are a couple of things I would urge you to remember as you go back fully into your roles on the other side of this fellowship. First, assume good intentions on the part of your colleagues. Back to our last discussion: there is uneven understanding of these issues, there are many different factors and considerations, many different values to weigh. We are going to have different points of view about what matters most and how to rack and stack those priorities, but I think most people working on these issues generally share a similar desire: to connect people, to bring the benefits of technology to people, to mitigate the harms, and to try to make the world a better place. So, first, assume good intentions. The next thing I would urge, at a very personal level, is that it is very helpful to have the ability to walk away. In our system people sometimes talk about that in financial terms; that’s not what I mean. I mean psychologically or emotionally: don’t fall so in love with your position that you can’t consider other points of view, and I think that is actually quite hard to do. One of my favorite authors, F. Scott Fitzgerald, wrote at one point that the mark of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time. And I think with these issues, you have to be able to do that. You have to be able to walk away from an idea that maybe you’re deeply invested in, because these are complicated things and they change.

Garima Vatla:
Yes. Thank you. Is it working? Thanks a lot. I’m Garima Vatla from the Ministry of Foreign Affairs of Estonia, currently posted to Geneva. It has been a really nice journey with these people around the table, and I think the human component is indeed the most important thing about it. I would maybe continue from what Ambassador Fick just said about always having those two sides or positions in mind, because very often when we talk about cybersecurity issues we focus on the security perspective, the threats that come from it, but we should not forget that technologies as such bring us a lot of opportunities. When we train people, we need to train them to look at the digital perspective as well, because I feel there is a lot of confusion around the semantics or definitions of what cyber diplomacy is, what digital diplomacy is, and how they actually connect to each other. It was very interesting to hear the Ambassador from Japan say that there is a need to increase the number of people, because we have three countries behind the table with very big administrations, while I come from a country where the number of diplomats is around 400 to 500, and we have to tackle those global issues all the time, right? So this is more of a comment than a question: we need to really emphasize the capacity, or the knowledge, of people to understand that digital and cyber issues are both part of the global policy questions, not a domain or a field of their own. Thank you very much.

Nathaniel Fick:
Garima, thank you. Can I respond to your comment, though, and point out that Estonia may have 400 or 500 diplomats in total, but Estonia is small but mighty, especially in these areas. And that is actually a generalizable comment, I think. One of the great benefits of these technologies is that the capital expenditure required to develop them is pretty low, while the scale benefits they bring are pretty high. So there is a bit of a decoupling between traditional measures of national influence and the ability to influence the world on these topics. Estonia is a perfect example: a very bad thing happened in Estonia, and in the wake of it, because of national leadership, investment, and focus, you developed a kind of world-defining expertise in this area that is incredibly inspiring, I think, to everyone around this table. That is a huge success story, and it is a repeatable success story in other places, with focus and discipline and investment.

Audience:
Good morning, everyone. My name is Andrew, and I’m from Jamaica. I have a question for the ambassadors. Coming from a very small island state, there are different priorities competing with each other all the time. This particular issue intersects with all issues of development; I see it as a developmental issue as well, because the harms that exist online sometimes undermine what is done back home in different areas. What would be your advice for countries where this issue may not be high on the agenda? And what would be your recommendations to help us go back to our countries and persuade leaders of government, at different levels, to give this issue the focus it needs, given that a lot of things compete and, in a small country, you have to decide the net benefit of pursuing X or pursuing Y? So what would be, let’s say, your top three recommendations to help an individual go back home and persuade the movers and shakers who need to approve these things? Thank you.

Regine Grienberger:
Thank you. It’s on. Okay, I’ll give the first and you do the other two. My point would be: sometimes digital is the door opener and cyber follows. Focusing on the opportunities is sometimes much more convincing, especially for leadership, than focusing on the risks. If you look at the corporate level, it is often the department for digitization that gets all the means, while the CISO has to ask for money all the time, because when everything is fine he has done his work, but nobody sees it. It is a little bit the same with cyber and digital: digital is much more comforting for leadership, but it is the same topic, just looked at from the other side, focusing on the opportunities. Many countries will look at it from this perspective, although the other perspective, the security perspective, cannot be neglected: if you want a sustainable transition, you also have to protect the back side of the project. Thank you.

Hideo Ishizuki:
I think this is, again, another commonality we are facing. As I have said, we are suffering from a shortage of staff in the ministry, and this is also a matter of priority. So we have to deal with this issue of awareness and make the case to our leaders so that we can get more resources for cybersecurity issues. Actually, I mentioned earlier that the World Bank has created a cybersecurity trust fund, and they are also struggling with this kind of element, because traditionally the development agenda is for digital, not for cybersecurity: you cannot see the benefit of cybersecurity unless you get cyberattacked. So, again, you have to show why cybersecurity is important. It is very hard to get statistics that show the effect of cybersecurity efforts, or a sort of return on investment in cybersecurity, but the World Bank is working on that, so if you approach them, they may have some good statistics to show the importance of cybersecurity investment. This is just for information. Thank you.

Nathaniel Fick:
I would echo and agree with my colleagues’ comments, and I think a useful analogy here is that it doesn’t matter how many times you tell a child not to touch a hot stove; the child has to touch the stove to learn not to do it again. I agree with Ambassador Ishizuki’s point that cybersecurity is a cost, and it’s really about avoiding bad things. Rather than learn from our own hard experience of touching the stove, we need to try to learn from others’ hard experience. So part of our challenge is convincing our leaders that the bad things we see happening in other places could happen to us, and that is a hard argument to make in a busy world where they may have 40 priorities and this is number 41. Which leads, in my mind, to Ambassador Grienberger’s point, and this has been true for me as well: focus on the opportunity, focus on the upside. Often that’s digital, and digital can be a path to cybersecurity.

Audience:
Hello, my name is Chittakiat Matapaniwat, from the Ministry of Foreign Affairs of Thailand. First of all, I wish to echo Ambassador Ishizuki on capacity building. I couldn’t agree more with what you just said, and the regional level can be the way forward. Thailand works closely with Japan on ASEAN-Japan cybersecurity capacity building, and we look forward to doing more in our region. Capacity building can also be more of a global initiative, like what the Bureau of CDP has been doing with this fellowship. I have one question that may or may not be echoed by other countries. In Thailand, when we talk about cybersecurity, most people think about cybercrime; they want to see how we can tackle cybercrime, but they don’t really know there is another aspect of security, the general security that Ambassador Grienberger mentioned. As a cyber diplomat working with this issue day to day, I know how important it is, but other ministries and government officials don’t really know. How can we raise awareness, and how can we convince them that this issue merits further investment, to prepare us for cyberattacks and other challenges? Thank you.

Regine Grienberger:
Okay, let’s start with the investment part. There has to be a trade-off between domestic investment and investment abroad in this field. When I talk with my colleagues about cyber capacity building, I always describe it as a two-way street. You might be investing in a program that helps law enforcement elsewhere get up to speed with what cybercriminal organizations are able to do, and you spend that money abroad, but at the same time you get an indirect reward from that activity, because the activities you are combating are by nature cross-boundary. If your partner is able to reduce the level of activity, it will benefit you as well. That is one trade-off you can use in your argumentation towards other parts of the government. The second question is, of course: should I invest in my own structures, the foreign ministry for example, or is the issue better taken care of in other parts of the government, for instance by increasing the international competencies of your cybersecurity agency? In our case, we have decided on a whole-of-society approach: all of the players in the government architecture should be able to deal with the international aspect. We have a national cybersecurity strategy that defines, as one of its action fields, Europe and international affairs (for us, Europe is the first framework), and that is not a list of taskings only for the foreign ministry; it is a list of taskings for all the actors in the field.

Hideo Ishizuki:
I think the role of the Ministry of Foreign Affairs in the government system is to collect information from abroad on incident cases, threat intelligence, and threat awareness. With that, we can convince other agencies that they have to work hard to protect themselves from these threats. So information gathering, and the dissemination of that information inside the government, is one of the roles and one of the things we can do vis-a-vis the other agencies. As for cybercrime, I think one of the biggest threats in cyberspace nowadays is ransomware, and for that the United States has set up a framework called the Counter Ransomware Initiative, in which more than 40 countries now participate. In our case, our National Police Agency is quite active in that framework. If you can have this kind of international framework in which police or law enforcement agencies can participate, that gives them a cause, or an interest, to invest more heavily in the cybersecurity side. Thank you.

Nathaniel Fick:
And with that, all good things come to an end: not only this morning’s session, but this fellowship for our inaugural class of Jellix Fellows. I want to thank my friends and colleagues, Regine and Hideo, not only for joining us here today but for your partnership around the world. And I really want to thank our class of fellows for committing your time and energy to each other and to the fellowship. I hope it has been a good year from your perspective; it has been really exciting for us to see the energy in the group. We look forward to working with you now out in the world, all around the world, in the years to come. We’re going to look to you to help us set the tone for the classes that follow. And I just want to conclude with a round of applause to thank you for all your hard work.

Audience:
Thank you. We wish you well, and we really are in it together. Take care. We’re all separate, but together we are? That is true. Thank you for doing this. Thank you for coming all the way down here. It’s always nice to get out of the capital. For sure. I’m a believer now, coming to Kyoto. It’s a special place, yes. Well, I’m off to the airport, so I’ll see you somewhere soon. Good luck in Singapore. Give David Cole my best; I’m sorry to miss him. It’s great. Okay, bye-bye. Thank you. There was one person who wasn’t here, so she’s right here. It’s good. Great. Let’s go do it. Perfect. Bye. I’ll have that, and then I’ll come back and drop it in. I’ll see you soon. Thank you.

Speaker statistics

Speaker               Speech speed            Speech length   Speech time
Audience              168 words per minute    1510 words      539 secs
Christopher Tate      201 words per minute    308 words       92 secs
Garima Vatla          184 words per minute    324 words       106 secs
Hideo Ishizuki        141 words per minute    1559 words      665 secs
Maritza Ristiska      137 words per minute    390 words       170 secs
Nathaniel Fick        158 words per minute    2501 words      947 secs
Pablo                 190 words per minute    549 words       173 secs
Regine Grienberger    144 words per minute    1296 words      540 secs
Sharif                181 words per minute    332 words       110 secs
Sumiya                148 words per minute    353 words       143 secs
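The speed, length, and time figures above are mutually consistent: words per minute is just the word count divided by the duration in minutes. A quick sanity check over a few rows (values taken from the statistics above):

```python
# Spot-check that each reported speech speed matches words / minutes.
stats = {
    "Audience": (168, 1510, 539),          # (reported wpm, words, seconds)
    "Hideo Ishizuki": (141, 1559, 665),
    "Nathaniel Fick": (158, 2501, 947),
    "Regine Grienberger": (144, 1296, 540),
}
for name, (reported_wpm, words, secs) in stats.items():
    assert round(words * 60 / secs) == reported_wpm, name
```

Each reported figure agrees with the recomputed value to the nearest whole word per minute, which is consistent with the report being generated automatically from the transcript timings.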