Artificial Intelligence & Emerging Tech

9 Oct 2023 06:15h - 07:45h UTC

Event report

Speakers and Moderators

  • Pamela Chogo, Tanzania IGF
  • Jörn Erbguth
  • Kamesh Shekar, Youth Ambassador at The Internet Society
  • Tanara Lauschner
  • Umut Pajaro Velasquez
  • Victor Lopez Cabrera

Moderators

  • Jennifer Chung

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis explores multiple aspects of the relationship between artificial intelligence (AI) and developing countries. One significant challenge these countries face is limited internet connectivity and a lack of electronic devices: most people live far from reliable connectivity and cannot easily access AI services, which hampers their ability to fully harness the benefits of AI. This is seen as a negative aspect of the situation.

However, a question is raised about how developing countries can still benefit from AI services despite these resource limitations. This suggests that creative solutions and strategies can be explored to ensure that developing countries can leverage AI. This neutral stance highlights the importance of finding alternative ways for these countries to benefit from advancements in AI.

Furthermore, AI is viewed as an opportunity for up-skilling and re-skilling youth in developing countries. This positive argument suggests that AI can provide educational opportunities and empower young people in these regions. Equipping youths with AI skills can better prepare them for the future job market and contribute to the economic growth of their countries.

Connectivity issues in developing countries for leveraging AI are also highlighted. This negative sentiment emphasizes that without adequate infrastructure and connectivity, the full potential of AI cannot be realized. It underscores the importance of investing in resilient infrastructure and promoting sustainable industrialization in these regions.

On a more positive note, there is support for the use of generative AI and data governance at a local level. This viewpoint suggests that AI can be a valuable tool for societal progress and development, and local communities should take advantage of it. The positive sentiment towards generative AI and local data governance indicates a belief in their ability to contribute to the achievement of global goals such as industry innovation and strong institutions.

Regulation in the field of AI is recognized as a necessary measure. It is understood that each country will eventually establish its own regulations for AI. Professionals in the field are urged to consider the implications and importance of regulation. Similarly, policymakers are encouraged to consider the need for regulations in the artificial intelligence industry. This neutral stance highlights the importance of ensuring ethical considerations, privacy protection, and responsible use of AI technologies.

In summary, the analysis sheds light on the challenges faced by developing countries in relation to AI, such as inadequate connectivity and limited resources. It also highlights the potential benefits of AI, such as up-skilling and re-skilling opportunities for youth. The need for investing in infrastructure and connectivity is emphasized, along with the importance of local data governance and regulation in the field of AI. These insights provide valuable considerations for policymakers, professionals, and stakeholders involved in the integration of AI in developing countries.

Kamesh Shekar

The use of AI technology needs to be more responsible, considering both the intervention and ecosystem perspectives. Various stakeholders, such as technology companies, AI developers, and AI deployers, have unique responsibilities in ensuring the responsible use of AI. Fine-tuning in the operationalization of AI principles is crucial. Certain principles, like “human in the loop,” can have different interpretations at different stages of AI deployment.

A balanced approach is required for the regulation of emerging technologies to prevent the creation of problems while solving traditional ones. The implementation of regulatory interventions in the context of emerging technologies is important, but regulations should not disrupt beneficial technological innovations, especially in developing countries.

Balancing AI utilization with innovation can unlock maximum benefits. India’s Digital Personal Data Protection Act serves as an example of finding this balance. The act focuses on the usage of data value as well as the protection of privacy. This approach demonstrates how AI can be used to drive economic growth and innovation while also protecting individual rights and data.

Regulatory frameworks for AI should include both government compliance and market-enabling mechanisms. India, as the chair of the Global Partnership on Artificial Intelligence, is working towards creating a well-thought-through regulatory framework. Consensus-building at the global level for AI regulation was also suggested. It is important for legislation to evolve to keep up with the rapid advancements in AI and emerging technologies.

The ongoing debate over the role of consent in data processing and utilization is a significant concern. While consent has traditionally played a critical role, its effectiveness in new technological contexts such as AI and the Internet of Things (IoT) is debatable. There is a growing need for new mechanisms to safeguard data, in addition to consent, to protect individual privacy and ensure responsible AI use.

Advocates call for data protection regulations that are in tune with technological advancements. As AI and emerging technologies continue to evolve, regulations must adapt to keep up. Innovative regulations are necessary to work in tandem with technological advancements and ensure responsible AI use.

Data governance in relation to AI is a crucial concern for the future. Emerging technologies raise important questions about how publicly available personal information is used. With new data protection regulations, the use of AI technologies with this information needs to be carefully considered to avoid potential misuse.

Regulatory considerations for AI should take into account the positive aspects and innovations that the technology brings. While regulations are necessary to address any potential risks, they should not hinder or disrupt the beneficial technological innovations associated with AI, especially in developing countries.

Global coordination is necessary for the progressive development of AI. Multiple entities with an interest in AI need to come together for discussions and collaborations. This will aid in addressing common challenges and ensuring responsible and ethical use of AI on a global scale.

The role of government and private entities is crucial in advancing AI. Governments and private sectors can work together to drive the progression of AI by implementing AI governance frameworks and utilizing market mechanisms to encourage adoption.

Trust and compliance play significant roles in AI systems. Trust can serve as a competitive advantage for AI adopters, and compliance, while burdensome, can bring trust in AI systems. It is important for regulations and frameworks to be seen as beneficial rather than burdensome to foster trust in AI technology.

In conclusion, the responsible use of AI technology requires a balanced approach that considers intervention, ecosystem perspectives, fine-tuning of AI principles, and regulatory frameworks. Legislation needs to evolve to keep up with the pace of AI and emerging technologies. Global coordination and collaboration are essential for the progression of AI, and the role of government and private entities is crucial in advancing responsible AI use. Trust and compliance are key factors in building confidence in AI systems. By addressing these various aspects, AI can be utilized in a responsible and beneficial manner.

Pamela Chogo

The importance of increasing awareness of AI in Tanzania was highlighted due to the low level of understanding of AI in the country. Many people are using AI or contributing to it without fully comprehending its implications. There is a misconception that AI is a purely technical matter rather than a socio-technical one. To address this, it is argued that awareness and education about AI, including its benefits and challenges, need to be raised.

The importance of ensuring fairness, explainability, robustness, transparency, and data privacy in the AI development process was emphasized. It is crucial to examine how data is gathered and the ethical considerations involved. The models and frameworks used in AI should also be scrutinized. By adopting these principles, AI systems can be developed in a more responsible and accountable manner.

An AI Convention is proposed as a means to create a safe and regulated environment for the use of AI. Drawing parallels to the importance of environmental conventions, it is argued that an AI Convention should be established to ensure all stakeholders adhere to ethical standards and guidelines. This convention would provide a framework for the responsible use and development of AI, allowing individuals and communities to benefit from its potential while minimizing risks and harm.

Concerns were raised about the digital gap in Tanzania and the lack of clear guidelines and standards in AI development. Technology is advancing rapidly while the country is still grappling with bridging the digital divide. To address this issue, it is suggested that comprehensive guidelines and standards be put in place to ensure AI development is inclusive and accessible to all.

The necessity of AI advocacy and awareness creation for policymakers, developers, users, and the community at large is emphasized. There is a great need for advocacy initiatives to promote AI awareness and educate various stakeholders about its potential and impact. Additionally, ethics in data collection should be ensured, and the accuracy and validity of data used in AI systems should be upheld.

AI is seen as beneficial in sectors like health and agriculture. AI can assist in predicting outcomes in agriculture, and AI-based devices can be used in hospitals to address the shortage of experts and resources. Furthermore, community-based AI solutions are identified as a way to mitigate challenges related to resource constraints and access to information and knowledge.

Despite resource constraints, it is maintained that Africa can still benefit from AI with the right approach. Many African countries are facing resource challenges, but the use of AI can provide community-level solutions and fill knowledge gaps.

The global nature of AI is emphasized, with AI being seen as a global asset. Cooperation and partnerships at an international level are crucial to fully harness the potential of AI and address the challenges it presents.

There is a call for more discussion forums and sharing of work on AI. By fostering collaboration and knowledge exchange, the development and understanding of AI can be accelerated.

Finally, the importance of an AI Code of Conduct or AI Convention is underscored. Such a code or convention is essential in establishing ethical standards, ensuring transparency, and promoting responsible and accountable use of AI.

Jörn Erbguth

During the EuroDIG session, the topic of AI regulation was extensively discussed. The importance of AI literacy and capacity building was emphasized as crucial for effectively utilizing AI. The session recognized that everyone, including children, needs to understand AI and its potential applications.

Transparency and explainability in AI regulation were explored. Although it was acknowledged that achieving full transparency and explainability with current technology presents challenges, it was considered a critical aspect of AI regulation. The session also highlighted the need to address issues such as discrimination, data governance, and other ethical considerations.

A global set of core principles and a multi-stakeholder approach were proposed as essential for effective AI regulation. The session emphasized that humans should ultimately remain in control of AI, and different regions may need to apply these principles in different ways, considering their societal contexts.

Data governance’s significance in AI development was emphasized, notably through the implementation of data governance policies like the EU Data Governance Act. This is seen as crucial to prevent monopolies on training data and ensure equal access to AI technology. Access to necessary data for training AI systems was considered fundamental for emerging nations and small-medium enterprises.

The EU’s risk-based approach to regulating AI, where systems with higher risks face more stringent regulations, was discussed. However, doubts were raised about the effectiveness of this approach. Some participants argued that regulations should focus on the applications using AI rather than the technology itself, due to the diverse nature of AI and its various use cases.

Flexibility was identified as a key factor in addressing potential risks and applications of technology. Given the uncertainty surrounding future developments, a flexible regulatory framework would enable effective adaptation and response.

Regulating technology before its development was recognized as having limitations. While the EU positions itself as a technology regulation leader, the specific form of regulation for the next decade remains uncertain.

In conclusion, the EuroDIG session emphasized the importance of AI literacy, global principles, data governance, and equal access in AI regulation. Challenges in achieving transparency and explainability were acknowledged, advocating for a flexible approach to address potential future risks. The session highlighted a multi-stakeholder approach to ensure effective and responsible use of AI technologies.

Tanara Lauschner

The analysis provides a comprehensive overview of different perspectives on artificial intelligence (AI) and its impact on social dynamics. It highlights both the positive and negative aspects of AI and emphasizes the need for responsible development and governance.

One viewpoint acknowledges the transformative potential of AI and its ability to generate novel content and integrate diverse ideas. This is exemplified by the outstanding capabilities demonstrated by large language models. These models can merge and connect ideas, leading to the creation of innovative concepts. The argument here is that AI holds great promise for various social dynamics.

On the other hand, challenges in AI are also identified. One such challenge is the manifestation of unintended behaviors and biases in large language models. Their decisions often lack transparent explanations. The argument presented here is that there is a need for interpretability and control in AI systems.

The importance of multi-stakeholder discussions in AI policy and governance is emphasized. The establishment of the Brazilian Artificial Intelligence Strategy under the Ministry of Science, Technology, and Innovations is highlighted as an example. The argument made is that involving all stakeholders in collaborative discussions allows for the sharing of ideas and the formulation of consensus policies.

Preserving and leveraging the internet and digital ecosystem as a catalyst for innovation is recognized as crucial. However, the analysis does not provide specific evidence or supporting facts for this viewpoint.

Another important aspect highlighted in the analysis is the ethical considerations in AI development. It is argued that AI should focus on responsibility, fairness, equality, and creating opportunities for all. Unfortunately, no supporting facts are provided to substantiate this perspective.

Offline AI applications are identified as a valuable solution for areas with sporadic internet availability. Language translation tools, health diagnostic apps, and AI-driven services hosted by community centers in remote areas are cited as examples. This viewpoint highlights the positive impact of AI in bridging the digital divide.

However, financial and accessibility barriers are acknowledged as limiting factors for the utilization of AI technologies. The argument presented is that people without internet access or financial means may not be able to benefit from AI solutions.

The analysis also emphasizes the necessity of community-driven governance for safe AI. The Brazilian Internet Governance Forum (IGF) and Lusophone IGF are highlighted as platforms where discussions on AI topics have taken place. The argument made is that community-driven governance ensures the safety and responsible implementation of AI technologies.

International cooperation is identified as a critical requirement for ensuring the inclusivity of AI. The fostering of debates and actions within the Brazilian IGF community and among National and Regional Internet Governance Forums (NRIs) is seen as essential in progressing towards this goal.

The need for trustworthiness and understandability of AI systems is emphasized. The argument put forward is that for AI to be trusted, it is necessary to understand how these systems work, what they do, what they don’t do, and what we don’t want them to do.

In conclusion, the analysis presents a balanced understanding of the impact of AI on social dynamics. While acknowledging its transformative potential, it also highlights challenges such as bias and lack of transparency. The analysis advocates for responsible development, multi-stakeholder discussions, and community-driven governance to ensure the ethical and inclusive implementation of AI technologies. Trustworthiness, understandability, and human well-being are identified as crucial considerations in the use of AI.

Jennifer Chung

During the meeting, the importance and potential of artificial intelligence (AI) were extensively discussed by the speakers. It was acknowledged that AI plays a crucial role in societal development and offers transformative opportunities across various sectors. The Prime Minister of Japan, Kishida-san, specifically highlighted the significance of AI in his speech. The importance of effective governance and regulations to ensure accountability, transparency, and fairness in the development and use of AI systems was emphasized.

To address this, policies, regulations, and ethical frameworks were deemed necessary to guide the development of AI and ensure its responsible deployment. The aim is to establish guidelines that prioritize human values and rights while avoiding any negative consequences. Secretary General Antonio Guterres established a high-level advisory body on artificial intelligence to further support these efforts.

An interesting aspect discussed was the involvement of various regions in AI discussions and policy recommendations. Representatives from national and regional IGF initiatives in Tanzania, Germany, Panama, Colombia, Brazil, and India actively participated in the discussions. Each IGF initiative highlighted different topics and priorities that are important in its respective jurisdiction and home region, reflecting the diverse perspectives and challenges faced globally.

Furthermore, the potential of AI to address societal challenges was extensively emphasized. The speakers highlighted that AI has the capacity to drive economic growth and can tackle various challenges ranging from healthcare and education to transportation and energy. This highlights the potential of AI to contribute to the achievement of Sustainable Development Goals (SDGs) such as SDG 3 (Good Health and Well-being), SDG 4 (Quality Education), and SDG 7 (Affordable and Clean Energy).

The human-centric approach to AI was another significant point of discussion. It was stressed that AI should aim to benefit society as a whole and avoid increasing the digital divide, especially for rural and indigenous populations. It was also highlighted that careful consideration of language barriers and cultural sensitivities is crucial in the design and implementation of AI technologies.

The importance of multi-stakeholder collaboration was emphasized in the development of AI regulations. The speakers recognized that addressing the complexities of AI requires input from various stakeholders, including government, industry, academia, and civil society. Collaboration and dialogue among these stakeholders are crucial for creating robust and inclusive regulatory frameworks.

Efforts to coordinate in the development of AI regulatory frameworks were deemed essential. It was suggested that instead of reinventing the wheel, existing good practices and frameworks should be utilized and built upon. This highlights the importance of avoiding duplication and ensuring efforts are channeled in the most effective manner.

The meeting also brought attention to the significance of awareness and capacity building around AI. Speakers stressed the need to educate and build knowledge around AI, as it is a tool that can greatly improve human societies. Digital literacy and AI literacy were identified as crucial components in the successful implementation and adoption of AI.

In conclusion, the meeting underscored the importance of AI in societal development and how it can address various challenges. It highlighted the need for effective governance, regulations, and ethical frameworks to guide the development and deployment of AI. Collaboration among different regions and stakeholders is essential for creating inclusive and comprehensive policies. Additionally, the meeting emphasized the significance of awareness and capacity building to ensure the successful integration of AI into society. The European Union was commended for its advanced approach to AI regulation and risk management. Overall, the discussions emphasized that a responsible and human-centric approach is vital to harness the full potential of AI for the benefit of all.

Victor Lopez Cabrera

According to the information provided, Latin America is predicted to become an ageing society by 2053, with the number of individuals aged 60 and above surpassing other age groups. This demographic shift highlights the necessity for enhanced healthcare provision for the elderly. AI-driven automation in healthcare is identified as a potential ally in addressing this challenge, especially in regions where human resources are limited. The use of AI technologies has the potential to improve healthcare services and contribute to better outcomes for the elderly population.

Furthermore, AI is also seen as instrumental in the field of education, particularly in promoting intergenerational learning. One example mentioned is a pilot project in Panama, where seniors and younger tech aides worked together, encouraging joint exploration of technology usage. AI has the capacity to facilitate education and bridge the gap between different generations.

While AI presents numerous opportunities, it should not replace genuine human connections. Interactions with AI, regardless of their sophistication, cannot fully substitute the importance of human interaction and connection. This perspective emphasizes the need to balance the use of AI technology with maintaining genuine human relationships.

Another significant consideration is the protection of data privacy. In an age characterised by concerns about data privacy, it is crucial for citizens to have the necessary AI literacy to discern when to share personal data and when to abstain. The responsible usage of AI should not compromise individual privacy, highlighting the importance of a balanced approach in AI implementation.

The responsible management of AI technology is deemed essential. As AI applications and methodologies continue to evolve rapidly, ensuring trust and maintaining ethical practices in the development, deployment, and use of AI systems becomes crucial. The implementation of a trust-certified system is proposed, as trust lies at the heart of the data sharing dynamic.

The analysis also suggests that the Global South should contribute more data to AI algorithms. It is stated that the biological markers of elderly people vary according to geographical and cultural context, and such diversity is not adequately represented in current datasets. The inclusion of locally specific data is deemed necessary to address complications in diagnostic tools and improve the efficacy of AI applications in these regions.

Efforts to address the digital divide are viewed as a shared responsibility. In Panama, for instance, despite being a small country, significant gaps in digital device ownership and internet access persist. Organisations such as the Internet Society are working towards establishing community networks for indigenous people, aiming to bridge the digital divide and promote equal access to digital resources.

Upskilling and reskilling in AI are highlighted as crucial for individuals and communities. However, this process should not solely focus on developing technical capabilities but also on the development of soft skills and humanity. The dynamic interaction between freshman students and elderly individuals, as observed in the speaker’s experience, was found to offer valuable opportunities for both technical and personal growth.

The importance of explainability in AI systems is also emphasized. The speaker suggests that if a computer cannot explain its behaviour, people will not trust it. Therefore, achieving explainability in AI applications is crucial to enhance trust and adoption of AI technologies.

Regarding Large Language Models (LLMs), they are viewed with some degree of scepticism due to their complex mathematical and computational nature. They are often considered as black boxes, lacking transparency in how they arrive at their outputs. Thus, research organisations and academic institutions are called upon to play a role in helping citizens understand AI and navigate its complexities.

In conclusion, while AI presents numerous opportunities in various sectors, it is important to approach its implementation with caution and responsibility. Genuine human connections should not be replaced by AI, data privacy should be safeguarded, and AI technology should be managed responsibly. Furthermore, the inclusion of diverse data, efforts to bridge the digital divide, and the development of holistic skills in AI education are essential for a balanced and equitable AI-driven future.

Umut Pajaro Velasquez

The discussions on AI governance stressed the importance of adopting a transversal approach that includes perspectives from the global south and promotes meaningful partnerships. It was argued that the majority of the world’s population resides in the global south, and therefore, their insights and contributions are crucial to creating a resilient governance framework. By incorporating diverse perspectives, AI governance can avoid being monopolised by the global north and instead reflect the needs and aspirations of a global community.

Another key point discussed was the integration of a human rights approach into each phase of the AI life cycle. It was emphasised that AI systems should prioritise and safeguard the rights of under-represented communities, such as black, indigenous people, people of colour, and LGBTQI+ communities. By embedding human rights principles into the design, development, and implementation of AI, it is possible to mitigate the perpetuation of existing inequalities and ensure equal access and opportunities for all.

The participants in the discussions also recognised that AI regulation should be a collective responsibility of all stakeholders. True progress in AI development and implementation requires collaboration and cooperation among governments, the private sector, academia, and other relevant actors. The involvement of multiple stakeholders ensures that a wide range of perspectives and expertise are considered, allowing for more comprehensive and effective regulation.

Concerns were also raised regarding the environmental impact of AI and emerging technologies. It was highlighted that as AI becomes more widely implemented, it is necessary to address the consequences it may have on our environment. Finding ways to minimise AI’s carbon footprint and ecological impact will be essential for achieving sustainable development and combating climate change.

Another important aspect that was discussed is the need to review and strengthen data protection rules for AI technologies and future developments such as quantum computing. Data plays a central role on the internet and in internet governance. Therefore, ensuring robust data protection rules, privacy measures, and accountability mechanisms are in place is crucial to build trust and maintain the integrity of AI technologies.

The discussions underscored the significance of providing access to AI tools and technologies for youth. It was noted that AI can be a valuable educational resource that engages young people and provides them with hands-on experiences in real-life applications. By offering the youth access to AI tools, we can nurture their skills and empower them to contribute actively to the future of AI and technology.

Furthermore, community efforts were deemed fundamental in AI learning and development. Acknowledging that AI is a collaborative field, suggestions were made to organise events such as hackathons where diverse individuals can come together to understand, improve, and democratise AI tools. By fostering a sense of community, AI can be developed in a more inclusive and equitable manner.

The discussions also emphasised the importance of continuing the dialogue and sharing insights from AI developments in different regions and nations. These discussions provide opportunities to better understand the real-world applications of AI and facilitate the formulation of improved governmental policies that address societal challenges effectively.

It was argued that to ensure the ethical and responsible use of AI, there should be a commitment to sharing AI discussions with governments and the final users. Open discussions and involvement of AI users can lead to more informed decision-making processes and a more human-centric approach. By prioritising the needs and values of users, AI systems can better serve society and complement human well-being.

In conclusion, the discussions on AI governance highlighted the need for a transversal approach that incorporates perspectives from the global south and promotes meaningful partnerships. The integration of a human rights approach into the AI life cycle, the collective responsibility of all stakeholders in AI regulation, and the consideration of the environmental impact of AI were also emphasised. Strengthening data protection rules, providing access to AI tools for youth, fostering community efforts, sharing insights, involving governments and final users, and prioritising human-centred AI were all identified as essential components for the responsible and beneficial development and use of AI technologies.

Session transcript

Jennifer Chung:
. . . Thank you for your patience. We are waiting for our organizer and we will begin in a few minutes. We appreciate your patience. Thanks. . . and governance for safe AI. My name is Jennifer Chung, I’ll be your moderator for this session. I wear a few hats in this space, but for this hat I am representing the Secretariat of the Asia Pacific Regional IGF. We have a lot of very interesting viewpoints from across the national, regional, sub-regional and youth IGF initiatives, and they are going to share with you, especially on the different developments, the topics, the issues, and all of the policy recommendations that come up from their meetings. The NRI network is a very strong network of 160 plus, and I think still increasing. And we also, hopefully people in the audience, if you would like to join the Zoom room, you’ll also see our NRI colleagues who will be giving their interventions and discussions also remotely. Not everyone is here in Kyoto with us, but of course this is the Internet Governance Forum, so it is a very good hybrid opportunity for everyone to be able to participate well. Speaking about artificial intelligence, this morning during the opening ceremony we heard from the Prime Minister of Japan, Kishida-san, about the importance of AI, and he unveiled the plans of the G7 to actually draw up a code of conduct for developers of generative AI. And we also heard from the welcome remarks from Secretary General Antonio Guterres about the establishment of the high-level advisory body on artificial intelligence. With all of this as a backdrop, and also indeed during this very jam-packed schedule of this IGF, you’ll be noticing an increase of a lot of the sessions that touch upon AI, emerging technology, and specifically on the governance, and the safe governance, and appropriate governance of this emerging technology. When we’re talking about national, regional, sub-regional, and youth initiatives, each of them have
different topics that are important in their respective jurisdictions and home regions. And it is really important to remember that AI holds immense importance for our societal development and offers a lot of transformative opportunities across various sectors. AI has the potential to enhance efficiency, productivity, and innovation, drive a lot of economic growth, and address a lot of the challenges we have in society. From healthcare, of course, coming out of the pandemic, to education, transport and energy, this emerging tech can really revolutionize processes and improve a lot of decision-making. But amidst all of these really good things that AI can bring us, we always have to remember that there are harms, and we definitely need good governance to be able to leverage its actual benefits to humanity. So talking about safe AI, we need policies, appropriate regulations and ethical frameworks to guide its development. We need to ensure accountability, transparency, and fairness in the design, deployment, and use of AI systems. So what is effective governance, and how can we promote responsible AI development? I'm very happy to introduce to you the panel of speakers that we have to share with you the different learnings and discussions from pretty much around the globe. Online, we have with us Ms. Pamela Chogo from Tanzania IGF, and she'll be presenting online. We have yet another online speaker, apologies if I get your name pronounced wrong: it is Jörn Erbguth from Germany, and he will be presenting the good discussions and the policy recommendations that came out of EuroDIG. We also have Victor Lopez Cabrera, and he will be giving us a flavor of the discussions that were had at Panama IGF. Also online we have Umut Pajaro Velasquez, and he will be presenting on the good discussions that were had at YouthLAC IGF as well as Youth Colombia IGF, which will be having its meeting later on this year.
So, enough of the introductions of the speakers online. Here on the panel we have with us, on the far end, Ms. Tanara Lauschner. She will be talking about the developments in Brazil and representing the Brazilian IGF. And right next to me we have Kamesh, who is actually an ISOC ambassador to the IGF, so another youth representative and delegate, and he will be presenting the viewpoints from India IGF as well. So I really want to start off with our speakers on stage, if that is okay, and then I will turn to our remote speakers online. But first I really want to ask Ms. Tanara, in terms of Brazil and the developments there: what are the crucial points of discussion, where are the pain points, where are the recommendations coming from, and what are the most important issues right now facing, I guess, the Brazilian community and also the LAC region? Ms. Tanara.

Tanara Lauschner:
Thank you, Jennifer. Hello everyone. First of all I would like to thank the IGF Secretariat for organizing this session, and I would also like to thank the host country Japan and the local Kyoto team. It's my first time here and I am excited to learn more about the country. As the coordinator of the Brazilian IGF, I'm keenly aware of the challenges that all teams involved face in carrying out this event. Congratulations, everything has been executed perfectly. I will try to add… I will address both questions, but I will definitely touch as well on other topics of the sessions. Regarding the theme of this session, AI should be recognized as an emergent technology with the genuine potential to transform various social dynamics. For example, concerning scientific discovery, we are on the cusp of a pivotal shift in the trajectory of scientific exploration. As the world delves deeper into the digital era, our collective scientific understanding and discourse are evolving concurrently. This evolution has resulted in an overwhelming volume of data, providing exciting opportunities for computational platforms. The convergence of these societal and technological trends suggests that artificial intelligence will bring a significant transformation. But why? Recently, large language models have demonstrated outstanding capabilities in generating novel content and integrating ideas. They show potential in thinking not just with language, but in logical structures based on code, suggesting they might generate innovative concepts by merging and connecting diverse ideas. However, the strong abilities of large language models come paired with challenges concerning interpretability and control. These models, black boxes, can manifest unintended behaviors and biases, and their decisions frequently lack transparent explanations. In 2020, Brazil established the Brazilian Artificial Intelligence Strategy under the Ministry of Science, Technology, and Innovation.
The objective is fostering innovation, bolstering economic competitiveness, and improving the quality of life for its citizens, always with an ethical and responsible lens. As a multi-stakeholder committee since 1995, CGI.br has collected valuable insights throughout this journey. A multi-stakeholder approach is a set of established practices rooted in the belief that all stakeholders should collaboratively discuss, share ideas, and forge consensus policies. It's a challenging endeavor, but in the same way, AI policy should be driven by multi-stakeholder discussions while also ensuring the preservation of fundamental rights. This multi-stakeholder model can further enhance international cooperation for ethical AI governance. CGI.br is leading the establishment of an artificial intelligence observatory in Brazil. This initiative aims to chart AI governance strategies and regulations, coordinate the development of open datasets for machine learning training specific to Brazil, and elaborate indicators to track AI research, development, and applications in the country. This observatory is still under construction, and we hope it can contribute to AI research in the internet governance community. We must not forget the main characteristic that has led the evolution of the internet throughout its history: innovation. The internet and the digital ecosystem must be preserved and leveraged as a key catalyst for innovation, as a basis for development, addressing past, present, and future concerns and technologies, including emerging technologies such as artificial intelligence. In order to extract benefits for people and drive the development of AI, we must first drive the development of our world with responsibility, fairness, equality and opportunities for all. All these aspects are also key when looking ahead to the Global Digital Compact, while also bearing in mind the importance of the digital agenda to fulfill the sustainable development goals.
For now, this concludes my inputs. Thank you very much.

Jennifer Chung:
Thank you, Tanara. A lot of very good developments, especially led by CGI.br, and a lot of interesting discussions that illustrate the very complex landscape the Brazilian community is talking about right now. I would like to stay in the LAC region, and that's actually one of the highlights of the NRI networks: we are really spread all over the globe. I'd like now to turn to one of our remote presenters, Victor, who will give us a little bit of the developments from Panama, with an emphasis on AI in health and education. Victor, if you're able to take the mic, the floor is yours.

Victor Lopez Cabrera:
Thanks so much. I really appreciate the invitation and I thank the IGF Secretariat for inviting us to present our short ideas about these topics. While every sustainable development goal holds significance, my emphasis is on enhancing healthcare and endorsing quality education, because they are the pillars of social development. In that regard, by 2053, Latin America is predicted to evolve into an aging society where those aged 60 and above will surpass other age groups in sheer numbers. AI possesses the potential to bolster the silver economy, crafting products and services tailored for the elderly. Ensuring the health and well-being of seniors is crucial, given the limited human resources we sometimes have; AI-driven automation emerges as a valuable ally. While AI's healthcare benefits extend to all citizens, our present focus zeroes in on the elderly, especially as this demographic grows rapidly. All sectors, from government to private enterprises, must collaborate on solutions. Recently, Panama hosted the AI World American Forum on the Silver Economy. The event facilitated discussions on digitizing health services with AI and enhancing tech literacy among seniors. A standout pilot project featured seniors actively working alongside younger tech aides, epitomizing the spirit of intergenerational learning. Both groups collaboratively explore tech usage and hone interpersonal skills. Our aspiration is to test telehealth tech tailored for tech-savvy seniors. Through co-creation, seniors have realized the vital role of technology in their daily lives, particularly in activities they hold dear. Simultaneously, the younger generation, aged 17 to 21, experiences firsthand the value of human-centered tech deployment, gaining insights into the troves of wisdom seniors possess. This collaboration is spawning new intergenerational initiatives. Furthermore, the inception of AI in education isn't recent.
It began with the advent of intelligent tutoring systems some decades ago, predating even sophisticated models like ChatGPT or any other LLM. I was there; I've spent 40 years seeing this development. AI's educational journey has witnessed various outcomes over the years. Moreover, this week, the Latin American Parliament, which is a body of decision makers headquartered in Panama, has championed the establishment of the Office of the Future. Its mandate is to anticipate technological shifts and prepare for them collectively. Ethical AI discussions dominate its agenda. In testament to this commitment, a Latin American assembly will convene in Panama later this year, focusing on collaborative and ethical AI deployment. Now, there are some concerns, of course, because AI has its pros and cons, independently of which AI variant is used, because AI is a set of different methodologies. It's not one AI; LLMs are only one of them. Data remains paramount. In our age of data privacy, informed, and arguably enlightened, consent becomes essential. The challenge lies in ensuring citizens now have AI literacy, not only digital literacy, to discern when to share data and when to abstain. Trust lies at the heart of this dynamic, necessitating trust-certified systems. AI is multifaceted, with applications and methodologies continually evolving, and keeping abreast of such rapid shifts is daunting. Therefore, I congratulate Brazil for the observatory they are building, because it's going to be key for our region. It's imperative that AI doesn't erode quintessential human skills, like empathy; interaction with AI, regardless of its sophistication, shouldn't replace genuine human connections. So we must be careful with the use of AI, so that it does not hinder development.

Jennifer Chung:
Thank you. Thank you very much, Victor, and thank you also for reminding us of the importance of multi-stakeholder participation, both in the process of designing how AI should be and its governance, and in how AI should be used. There is a very fine nuance there, and it's a very important nuance we need to take on board. Thank you also for reminding us that data is paramount: when we're looking at emerging technologies, this is a set of technologies that, of course, very quickly manipulate all the data that we have. I'd like to stay in Latin America and go to another remote presenter, Umut, who will be giving us the outcomes from the YouthLAC IGF as well as his own expertise on AI. Umut, if you're able to, the floor is yours.

Umut Pajaro Velasquez:
Hello everyone. Well, as Jennifer said, I will be presenting mainly the outputs from the YouthLAC IGF that occurred this year in ECOS related to AI and emerging technologies. This is a summarized version of what people presented during the three days of discussions, and for me it's quite a demonstration of the multifaceted challenges and prospects that AI and emerging technologies hold. The main thing that we discussed was global inclusivity in AI governance. Specifically, it was reiterated that AI governance should transcend geographical boundaries and not be monopolized by the global north, because creating a resilient framework and promoting meaningful partnerships requires the incorporation of perspectives and voices from the global south, where the majority of the world lives. This is a way to ensure the development and regulation of AI technologies consider the diversity of needs and contexts of these nations. It was also emphasized that a human rights approach must be critically integrated into each phase of the AI life cycle. This entails emphasizing the rights of underrepresented and marginalized communities, such as black and indigenous people, people of color, LGBTQI+ communities, women and children. It was considered that, by prioritizing these rights, AI technologies will have the potential to empower and uplift these vulnerable groups and not harm them, as is happening right now, especially on some social media platforms and in other representations that AI is producing.
There was also a consensus among the different stakeholders that were part of the conversation during the YouthLAC IGF that AI regulation should not be the sole responsibility of governments, the private sector, or academia acting independently, because true progress in this aspect necessitates a cooperative effort among all the stakeholders engaged in the designing, developing, implementing and use of this technology. Only through collective action can we effectively tackle the multiple challenges that AI presents to us. Some participants expressed their concern regarding the multiple dangers of AI. We emphasized disinformation and deepfakes, and how these can have severe repercussions, particularly in political campaigns; here in Latin America, these kinds of strategies are used a lot. So strategies to mitigate these risks should be an essential element of AI governance frameworks. That means this should be a responsibility not only of governments but also of the private sector, in order to preserve the integrity not only of information but also of democratic processes. The environmental impact of AI and emerging technologies was also highlighted, emphasizing the necessity of addressing the consequences of widespread AI implementation. Moreover, it is crucial that the advancement of these technologies does not increase the digital divide, particularly in rural regions and among indigenous populations. It is necessary to consider language barriers and cultural sensitivities to guarantee that these technologies are comprehensive and made for everyone. This implies universal acceptance from the design stage of these technologies.
Finally, the need to review and strengthen data protection rules was also addressed, not only for AI technologies but also for future developments such as quantum computing, because data plays a central role in the internet and internet governance. So robust rules are essential to secure not only privacy but also a more safely evolving technological landscape such as the internet. And that was, in general, all of the outcomes that came from the YouthLAC IGF this year. So, thank you.

Jennifer Chung:
Thank you, Umut. That was a very wide-ranging and comprehensive set of outputs that the YouthLAC IGF has discussed, ranging all the way from ethical guidelines and regulatory frameworks to even the environment, and also really bringing it back to the most important part: it has to be human-centric. So now I want to move from Latin America, the LAC region, all the way back to the region that we are in right now, which is Asia Pacific, and specifically India. So Kamesh, since I know that your day job also involves looking particularly at these policies, especially policies on AI, can you tell me a bit about how the development, design, deployment, and auditing of AI can be shaped to prioritize ethical considerations?

Kamesh Shekar:
Thanks for that question, Jennifer. Some great points have come out from diverse regions; I'll try not to repeat them and instead give some unique perspective here. I'll come to the India part towards the end of my intervention. I have some three points to add, starting with the very well articulated question, I must say, in terms of development, deployment, and auditing. Just coming from the status quo itself, we all know that there are various frameworks available out there which talk about different principles and what should be done at different stages. But one aspect that is very important as we move forward, thinking about how we can make AI technology utilization more responsible, is to think a little bit from the intervention perspective itself, from an ecosystem perspective. When you talk about designing, at the design level there are key stakeholders involved, which starts with technology companies themselves, and then comes to AI developers. And AI developers are not necessarily the people who are actually deploying such technologies. When it comes to the deployment stage, there are AI deployers, and also people who actually deploy on the ground. If I could give an example, you started with healthcare. Within healthcare, maybe there's some tech company that's actually developing a health technology based on AI. That might be bought or brought in by hospitals, and ultimately it is operationalized or used by doctors or healthcare professionals. So across this chain, if you see, there are various stakeholders involved, and everybody has responsibilities towards ensuring, at the ecosystem level, that when AI is used, it is used responsibly. So figuring out those nuances is, I think, the way forward.
In terms of making this technological use better, nobody is denying that the technology brings out the most positive outcomes in most of the critical sectors. So just to make it work even better, I guess looking at it from the ecosystem perspective is important. The second point I wanted to talk about is some of the principles that are already available, like human-centered AI principles, trustworthy AI, or explainability, et cetera. Here, to an extent, we are seeing consensus across the globe, or within domestic paradigms, on the principles we are really striving towards in the utilization of AI, or the way it has to be designed and deployed. But a little bit more nuance (I guess NIST does a little bit of this work) is needed in terms of the operationalization aspect, which has to kickstart a little bit more. If I could give an example, we talk a lot about principles like human in the loop, right? But when it comes to operationalization across the AI lifecycle, that principle means different things, and we need to bring such differences out. For different stakeholders, as I was mentioning in my previous point, it also means different things. For instance, human in the loop as a principle at the planning stage might mean that you have to engage with stakeholders and bring in the people who will be impacted by this. But in the actual operationalization stage, maybe you need to have a human who is also supervising whatever is put out there. So we need to bring such nuances out, such that whoever is developing or deploying the technology can easily pick up and understand their responsibilities and use it safely.
Now coming to my final point, in terms of what India is doing. Like any other nation, India is also looking at how to make the most of this technology. But the nuance here, as Victor also pointed out a little bit, is how we can balance this with innovation, right? AI is a cut-through innovation, and that's how, at least, the global South is going to excel in the future. So we need to find that balance, and India did a very good job there in the Digital Personal Data Protection Act, which came out very recently, where we tried to look at two different aspects: both how to use the value of data as well as how to protect privacy. So similarly, something is happening within the AI ecosystem as well. A second sub-point within this is that we all know India is the chair of GPAI this year. So one key aspect, which our minister has also been putting out, is: what could be a well-thought-through regulatory framework? And when we talk about a regulatory framework, there's also a connotation: regulatory framework means compliance, and that's going to come from the government. But a regulatory framework can be anything, right? It can also be market-enabling mechanisms that the government might be thinking about. So that is going to be one of the themes which has been suggested through GPAI that India will be sharing. And also there, I guess one key aspect is that any of these forums, bilateral or multilateral, should be striving towards consensus building.
Because at this moment, everybody is doing something or other when it comes to regulatory aspects and principles. But one aspect that we see consistently is principles. So we need to hold on to that point and see whether it can be a conversation starter at the global level, for us to have a conversation that, okay, you also talk about this principle, I also talk about this principle, can we come together? So I guess that is the key importance, and I think that's also something we might see happening in GPAI this year. I'll stop there, and I can come in later.

Jennifer Chung:
I really appreciate, Kamesh, you bringing in the flavor and nuance, especially with India becoming the chair of GPAI later this year. And I'm going to abuse my hat as a moderator very briefly, just to add to the context from Asia-Pacific, since I also wear the hat of the Secretariat of the APrIGF. This year, too, we had discussions on the contours of AI regulation, and I want to highlight and pick up on one of the points you made. There are so many different regulatory frameworks, and everybody is trying very quickly right now to develop good practices and best practices. I think the most important part is that this is somehow coordinated, if not harmonized, in the sense that we are actually leveraging and not redoing and reinventing the wheel as we move forward together. So I cannot stress enough the importance of multi-stakeholder collaboration, just echoing what the SG said this morning about having a high-level body to oversee this, as well as some very important and strategic collaboration both within the Asia-Pacific region and with all the regions around the globe. Enough about Asia for now. I would like to now move over to the Africa region, and we're going to be moving to another online speaker, Pamela Chogo. If you are able to, we would love to hear a little bit more about the good discussions that came out of Tanzania and, I guess in general, also from the Africa region. Pamela, the floor is yours.

Pamela Chogo:
Yes, can you hear me? Yes, we can hear you, Pamela. Okay, thank you. As mentioned, my name is Pamela from Tanzania. I'm a researcher and a lecturer working very much around natural language processing. From our side, we had a long discussion during our IGF in line with AI and what is happening. So far, we are enjoying the benefits, but at the same time, we see a lot of challenges around. In our context, unfortunately, we are still working on closing the digital gap. We see that technology is moving so fast while we are trying to close the digital gap, but now we also have to be talking about AI. And there is a bit of a scare, because the level of understanding of AI is still very low. Among those in the digital space, some of the developers are not much aware of how to develop AI solutions. But also, when you come to the users, we have users who are using AI, but some of them, or many of them, are using it unknowingly; they don't even know that they're using AI. On the other side, we also have the community at large, which is also contributing towards AI, but they don't really know or understand that in their daily activities they are contributing to AI. So this calls for a long discussion, and we see that there's a great need for increasing awareness of AI, because many see AI as a technical aspect, but looking at AI, it is basically a socio-technical aspect, and it really needs to be looked at from both sides. So on our side, we suggest that there's a great need for AI advocacy and awareness so that people can understand more. This will help us ensure there's fairness, explainability, robustness, transparency, and also data privacy taken into consideration.
So we need to look at all aspects from the people's side, like their general understanding of AI, but as the other speaker mentioned, also considering other aspects such as culture, diversity, and background in the development and use of AI. But we also have to look at the process of developing the AI. For instance, when you talk of AI, you talk about data. So how are we gathering the data? Here it comes to the issue of consent during data collection, but it also comes into the data collection process: how ethically are we doing this? But also the technical tools: a great understanding is needed of the models and the frameworks that are in place. And the developers should be in a position to share with us, be transparent, and tell us the models that they are using, the frameworks that they are using, and if possible, the reusability of these frameworks and models should be easily done. So as I mentioned, we emphasize awareness creation, and it should consider all sorts of stakeholders: policy makers, developers, users, and the community at large. Ethics in data collection should be ensured, along with validity and accuracy. And here I can share a bit of my experience as I started working with NLP. I actually did not get… So I had to read several documents to find the right path that would be ethical. So if we had something that can guide all of us, that we can all follow, then it would make the process more human. But also, what are the standards? For instance, when you train a model, you have the aspect of accuracy. So what would be the best standard? Is it 95% accuracy? Is it 100% accuracy? And what happens to that 5% or 2% that we have ignored? It might have effects in one way or another. So I'm happy that you briefly mentioned what was discussed in the opening session. You mentioned something about the code of conduct, and I am so happy to hear that.
Because for me, my main suggestion was that the world should now look at AI the way we look at the environment, let's say the way we look at climate change, and maybe come up with an AI convention that everybody would have to adhere to. It doesn't matter which part of the world you're in; you have to align and follow it. And I think this will create a safe space where we can all enjoy the benefits of AI without affecting anyone. Thank you. That's my contribution for now.

Jennifer Chung:
Thank you so much, Pamela, and thank you for also highlighting the importance of the different levels of development and awareness around artificial intelligence. As Umut has already talked about, we need AI literacy. In addition to digital literacy, AI literacy is also extremely important, and the capacity building and awareness around it are very important. Because AI, at the heart of it, is a tool for us to improve our human societies, and how we learn to use this tool, and how we are aware of when we can deploy this tool, is extremely important. I want to wrap up this first round of opening remarks from speakers and end in the European region. And really, for Jörn, I'd like to ask a little bit more, because I know EuroDIG talked quite a lot about AI and emerging technologies, but specifically on how the IGF and the Global Digital Compact are looking at this in particular. And I know, Jörn, you also have some slides; if you would like to share those, the floor is yours. Thank you.

Jörn Erbguth:
Thank you very much. So I'm the EuroDIG subject matter expert for human rights and privacy, and also affiliated with the University of Geneva. EuroDIG has participated in the UN Secretary-General's Global Digital Compact process and has submitted one chapter on AI, which you see here. We have mentioned all of the issues: transparency, explainability, discrimination, human-centered AI, data governance, data protection and privacy, data sources. As I said, all around trust. And the sad truth is that at least some of them cannot be achieved with the current technology. We cannot have transparency and explainability, we don't know whether systems will discriminate, and we still have data protection issues. So these things will have to be dealt with or solved in some way, but we cannot just say, well, let's do some regulation and then we will have it. Interestingly, when the EU started to draft the regulatory framework for AI in 2021, they were not aware of large language models like ChatGPT. And when they became aware, quite a lot changed in the already drafted AI Act. This means we have to be aware that new applications and new technology advances will change how we need to regulate, and applications in particular will require change there. So we see that we should agree on a global set of core principles, like that humans should ultimately remain in control, have oversight and remain responsible. And I agree, of course, with Umut that there is room for interpretation there and that we need a flexible model to act on new technological developments and applications. The multi-stakeholder approach and cross-regional dialogue are key for ensuring harmonization and offer support for cross-border use cases. Since some regions and states also have different concerns, different attitudes and cultures, policymakers should be able to quickly adapt these general principles through concrete instruments to their own situation.
So one rule that is carved in stone and valid for everybody will not solve it; we need a flexible tool. For example, when you look at education, I think EuroDIG has had a session on AI and LLMs in education, and we see children are among the most vulnerable population, but children will also be required to use new technology in the future. It doesn't make sense to teach them the skills that were required in the past. They need to work with new tools. Students need to be able to study AI. Research is essential and should not be restricted by regulation. Investment in educational programs and raising awareness is needed to help users understand AI technology, to understand the benefits of this technology as well as its risks. Neighboring technologies, like robotics and IoT, as well as future technologies, like quantum computing, also need to be taken into account when they become available. Current ongoing regional and global initiatives on collaboration and information sharing should be supported. So, as was said before, the multi-stakeholder approach here is very important and should not be replaced by one uniform regulation that everybody has to adhere to in the same way. We need to continue with the multi-stakeholder approach, particularly because technology is changing fast and evolving. So I don't want to speak too long. Thank you, and I look forward to the further discussion.

Jennifer Chung:
Thank you so much, Jörn. I think EuroDIG did a very comprehensive discussion, especially on the implications of AI. I think we can all agree that having core principles when we’re looking to create a regulatory framework, or any kind of framework, is extremely important. And you’ve also jumped ahead to the discussion, hopefully, that we’ll have right now with the audience on the floor and online about how the NRIs can commit to actions. I think EuroDIG has probably shared this already with the NRI network, but this is something that we can build on and actually implement as well. So now I’d like to open the floor to any questions we have. I already know that there is a question from the Bangladesh Remote Hub that they’d like to take the floor. But anybody in the audience, if you do have a question, I’m not sure if there are any roaming mics around; I think I see a mic stand over there. So if you do have any questions or any suggestions, or if you want to share your perspective from your region or your jurisdiction, that would be very helpful. So first, I’d like to see if we can give the mic to the Bangladesh Remote Hub, if you will allow them to unmute themselves and ask their question. If you’re able to give Bangladesh Remote Hub co-host rights so they can unmute. Thank you.

Audience:
Thank you, respected moderator, Ms. Jennifer Chung, for giving me such a golden opportunity to place my question in this important forum. I am Shainara Begum from Bangladesh, working as an individual consultant in the center. Today’s topic is very important: artificial intelligence, which is a field combining computer science and robust data sets for problem solving. I think it could be a very strong instrument to cross boundaries for learning and doing our daily work quickly and hassle-free. My question is that, in developing countries, in the context of our socioeconomic status and other scenarios, most of the people are far behind in internet connectivity and electronic devices. In this context, how can we benefit from artificial intelligence services? Thank you. So, another participant from the same room has a question. Can you hear me? OK, yes, you can ask your question as well. Please go ahead. I’m so sorry, the second question was not quite audible.

Jennifer Chung:
If you could either repeat the question or type it in the chat, I’ll be happy to read it out. Can you hear me right now? Yes, we can hear you. Yes, I’m repeating my question. My question is totally youth-centric: how can we use artificial intelligence in education for re-skilling and up-skilling youths in developing countries? We’re also relying on the captioners to capture the question. I think it said how we can leverage artificial intelligence in agriculture in developing countries, and education as well, right? Just confirming that is your question?

Audience:
Okay. How can artificial intelligence enable opportunities for re-skilling and up-skilling for youths in developing countries?

Jennifer Chung:
Now we hear. Yes, up-skilling for youths. So there are two questions from the Bangladesh Remote Hub. One is regarding how citizens in developing countries, where there are still issues with connectivity, can leverage and benefit from artificial intelligence. The second one is regarding the up-skilling of youth in this respect. I see somebody in the line. So we’ll take this question, and then we’ll go to the panelists. Please go ahead.

Audience:
Yeah. Thank you, Jennifer. Ponsley from Gambia NRI. My question, I want to reference my colleague from India in regards to the use of generative AI and data governance: how do you think, at a local level with our national NRIs, we can use that to impact the growth of digital governance in our respective countries? Thank you. Thank you, Ponsley. So do we have any panelists

Jennifer Chung:
who wants to answer the first two questions? If not, I think I’ll go directly to Kamesh to answer the third one, since it was directed to you, and then we can go backwards to the other two questions. Kamesh. Thank you so much. I didn’t quite catch his name, but yeah, thank you so much for that question, and

Kamesh Shekar:
I guess data governance is very close to me, so that’s a very interesting question, because sometimes we think these legislations are only catering to specific kinds of technologies, but they might not be. How a legislation like the Digital Personal Data Protection Act 2023, which is very new for India and which we just passed, is going to apply to AI technologies remains to be seen. I can’t give much experience from India because we just have the act, and we will be seeing how it plays out going forward. But one aspect which is specifically important when we talk about data governance and artificial intelligence is how, going forward, we can use publicly available personal information. That is a very key aspect, and something we have to be talking about, because at this moment, whatever innovations or technologies we have, any AI algorithm has to scrape data which is available out there to provide the service. How such technologies can be used going forward with a data protection regulation in place is something we have to look at. Another aspect, as you were asking about some learnings: as I said, in India, we still have to learn. But one learning, at least globally, concerns whether consent is used as an artifact for the utilization or processing of data. With emerging technologies like artificial intelligence and IoT, a crucial question for us to answer right now is: is consent the way forward? We need to start thinking, as technologies evolve and emerge, about new ways in which we can safeguard our data. Consent and other older mechanisms obviously have their merits, but we also have to evolve and have more options.
Because I can’t really see such an artifact being applied in a generative AI kind of situation. So that’s one learning for any new jurisdiction moving towards data protection regulation: to consider more innovation within regulation, which could work in tandem with the innovations that are happening in technologies. So that’s my answer, and I hope that helps. Thank you.

Jennifer Chung:
Let me try to answer the first and the second questions.

Tanara Lauschner:
I think offline AI applications can be utilized, such as language translation tools and health diagnostic apps that function without continuous online access, but it is difficult to imagine how people who don’t have a connection and don’t have money to access these kinds of devices can improve their lives with these solutions. But in agriculture, I don’t remember the name of the man who spoke, AI can enhance farming methods by predicting weather or potential threats, helping farmers even during sporadic internet availability. Furthermore, community centers in remote areas might host AI-driven services, offering local residents insights from educational to health-related applications. Even without consistent online access, AI’s transformative potential can still be harnessed in diverse and impactful ways. Thank you. Thank you, Tanara. That is extremely important when we’re looking at developing

Jennifer Chung:
countries: what are the benefits when we’re still looking at problems of access, problems of connectivity, when we’re still trying to get people online, how can they benefit from AI? I’d like to turn to Pamela for a little bit of a response as well, since she mentioned specifically, from Tanzania, the need for capacity building and awareness, because there are already other issues and problems facing the community there. So Pamela, maybe a little bit of a response from you regarding that? Okay, thank you.

Pamela Chogo:
Yes, as I mentioned earlier, and I think as we all know, most African countries are still struggling with resources, but I see AI being of greatest benefit, I would say, in bringing up community solutions. So it might be difficult as an individual to access a certain service, but this service, through AI, can be beneficial in a community context, for instance in health or in agriculture. You can have an AI solution that, let’s say, can help in prediction, as mentioned, in agriculture, or can also be used in the health sector. Now, it is not necessary for an individual to have this device for AI to assist; the device can be present in the hospital, and hence can solve the problem of lack of experts as well as lack of other resources. For instance, in my studies, I am developing a chatbot to be used in agriculture, and the problem I am solving is the lack of extension services. So this particular chatbot can be used in a community where they can access the information and the knowledge collectively. So yes, we have challenges, but I believe with what we already have, we can still get benefits out of AI.
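[Editor’s illustration] The community chatbot Pamela describes can work as a pattern that runs entirely on one shared, low-resource device: keyword matching over a small local knowledge base, with no internet connection required. A minimal sketch of that idea; the crop questions and answers below are invented for illustration, not taken from her actual system:

```python
# Minimal offline FAQ chatbot: answers by word overlap against a small
# local knowledge base stored on the device. All entries are illustrative.

KNOWLEDGE_BASE = {
    "when should i plant maize": "Plant maize at the start of the rainy season.",
    "how do i treat leaf rust on wheat": "Remove affected leaves and apply a fungicide.",
    "how often should i water tomatoes": "Water tomatoes deeply two or three times a week.",
}

def answer(question: str) -> str:
    """Return the stored answer whose question shares the most words."""
    words = {w.strip("?.,!") for w in question.lower().split()}
    best, overlap = None, 0
    for stored_q, stored_a in KNOWLEDGE_BASE.items():
        score = len(words & set(stored_q.split()))
        if score > overlap:
            best, overlap = stored_a, score
    return best or "Sorry, I don't know. Please ask an extension officer."

print(answer("When to plant maize?"))
```

A real deployment would use a trained intent model and a much larger knowledge base, but the key property is the same: the whole system fits on one shared device at a community center or clinic.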

Jennifer Chung:
Thank you so much, Pamela, that is extremely important to keep in mind as well. For the question about up-skilling for youth, if I could ask Umut to elaborate a little bit more specifically, since you had some very good learnings coming out of the Youth LAC IGF. What should we do to upskill youth on the dangers, the benefits, or even just the general use and tools of AI and generative AI, Umut?

Umut Pajaro Velasquez:
Well, when it comes to youth, as was mentioned before, especially in Global South countries, we pretty much know that not all of us have access to these technologies. Part of the question was also about how we benefit in these emerging economies when we start to use AI technologies. So for me, mainly, it’s about finding ways to create more access, because AI can be taught in a way that is more engaging and provides a more hands-on experience to youth, so they can understand how the tools are actually applied in real life, how they can benefit from them, and how they can enrich processes in spaces like work, education, and others. Because we know there are risks, but we also know there are benefits, so we should focus on that aspect of AI too. But for that, we need to know how to use the different tools, and to do that, we have to meet young people in whatever they like. Another way to build skills or abilities in AI is to reach out to people who already manage these kinds of tools and learn from them. And finally, what I can say is that when we try to upskill ourselves, it’s better if we do it as a community, because AI is a collaborative field, and it’s important to create this kind of community, especially when we are young, so we can learn from and help each other. And we can do it in several ways, online, or by developing hackathons where you have several people with different expertise trying to understand how a tool is being used or how you can improve that tool. So yeah, there are several ways you can do that and benefit not only in educational aspects but also in other aspects of our lives.

Jennifer Chung:
Thank you so much, Umut. I also want to pose these questions to Victor and Jörn, if you wanted to respond to any specific questions that were asked, regarding either the upskilling of youth or the other question on how the Global South and developing countries can actually benefit from AI. That’s besides the question directed to Kamesh, but we’d like to hear from you too if you’d like to intervene. Yes, please. If I may? Yes, please go ahead.

Jörn Erbguth:
Okay, there was a question about data governance, and data governance is not only data protection; there is specific data governance, and the EU, for example, has passed a Data Governance Act. When you look at AI, it is important that there is no monopoly on the training data, and this has been addressed by the EU Data Governance Act. Of course, this is also very important for developing nations, for small and medium enterprises, for startups: that there is no monopoly on the training data. And so, big companies are requested to share data that can be used for training purposes. Also, when you look at copyright: if copyright is extended, then this might be a barrier to using freely available data for training purposes. So, it is very important to have equal access to AI technology and no monopoly on the data. And this is something that exists besides data protection. Data protection is about personal data, and training data does not have to be personal data; it can be any kind of data that is necessary for training AI systems. And I think it would be very important for the Global South that they too can access this data, and that there are no barriers to accessing the data to train systems. Well, may I? Please go ahead, Victor.

Victor Lopez Cabrera:
Okay, I’m going to address the last one and then go upwards. I think it’s wonderful having datasets available and not having a monopoly. The situation is that the Global South has to provide data also, so that the algorithms can actually model our specifics, in our countries, with our context. I can pinpoint an example. If you have biomarkers for the elderly, the ranges are not the same if you are in Brazil or in Panama, probably they will be similar, but if you are in Europe or North America, the nutrition factors and the culture will affect the ranges. Medical doctors have to decide how to do, for example, automatic telemonitoring of health for the elderly, and we saw that problem because we were trying to do some diagnostics, and the ranges were not exactly appropriate for people who were in the countryside doing agriculture, because it’s not the same body, not the same metabolism. So the South and the developing countries need to contribute more data, so that the data sets are enriched and there are no more biases. That’s one thing, and I don’t know what would be the best way to do that, but I know the need, because you need to do it so that the probabilistic computations of the models can be adjusted. In terms of the digital divide, well, that’s going to be a really hard one. Panama has only 4 million people, it’s a very tiny country, you can go all over the place, and still we have a divide, especially after the pandemic: those who do not have devices, those who have devices but do not have access to the internet, and those who have none. It’s a really nasty problem that governments and the private sector, society as a whole, not only governments, have to establish a way to address.
Computationally, if I may say so, you might work with AI that is connectionless, which means that at some point you can do some work with AI on your cell phone, small tasks like some diagnostics in the countryside, and whenever you get to a place where you can connect, then you update, and then you can do some more work. So, computing at the edge: trying to do some work over there, and then, when you connect, doing the rest. You have to work with communities. For example, in Panama, among the indigenous peoples, the Internet Society is working on establishing community networks so that they can have access to the internet. And then, along with that, will come telemonitoring, e-learning, and the other goodies. But it has to be a collective effort, and I’ve seen it working at the community level; we have to be part of the solution, not part of the problem. And with AI for skilling and upskilling: my own students are in first year, and I didn’t teach them ChatGPT; they learned ChatGPT by themselves. By the time they came, and I teach freshman students, first year, first time at the university, they already knew it, but they didn’t get that at school, which means there is a fresh mind and you just have to be a mentor. We need fewer professors and more mentors, people who actually share the learning experience and let them grow, of course, guiding them. And in terms of upskilling, well, I put them to work with the elderly, and both were learning ChatGPT. Can you imagine an 85-year-old learning ChatGPT from a 17-year-old? The kind of dynamics that you can get out of that. So upskilling and reskilling is not only about technical capabilities; it’s actually about becoming a better human being by using technology to teach another human being how to use technology. That’s my take on that.
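[Editor’s illustration] The "connectionless" edge pattern Victor describes — run small inference tasks offline, then update when connectivity appears — is essentially store-and-forward. A minimal sketch under simple assumptions; `tiny_model` is a stand-in for a real on-device model, and the sensor threshold is invented:

```python
# Store-and-forward sketch for edge AI: infer locally while offline,
# queue the results, and flush the queue when a connection appears.

from collections import deque

pending = deque()  # results waiting for connectivity

def tiny_model(reading: float) -> str:
    """Stand-in on-device model: flag abnormal sensor readings."""
    return "alert" if reading > 40.0 else "normal"

def process_offline(reading: float) -> str:
    """Run inference locally and queue the result for later upload."""
    result = tiny_model(reading)
    pending.append((reading, result))
    return result

def sync(upload) -> int:
    """When a connection appears, flush the queue through `upload`."""
    sent = 0
    while pending:
        upload(pending.popleft())
        sent += 1
    return sent

# Offline in the field:
process_offline(36.5)
process_offline(41.2)
# Back in coverage: push everything to the server (here, just print).
sync(print)
```

The design choice is that the device never blocks on the network: the model runs either way, and connectivity only determines when results are shared upstream.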

Jennifer Chung:
Thank you so much, Victor. I’d like to see if there are any more questions, both from the floor and online. Oh, Nazar, if you’re able to go over to the mic.

Audience:
Sorry, I came in late, but I have a question which I think would be of a lot of interest to the key players in the industry. What are the considerations for regulation? Not over-regulation, but regulation, because whether it is generative AI or the other side of artificial intelligence, ultimately each country will at some point make regulations. As professionals in this field, what sort of considerations should policymakers have when drafting regulations for artificial intelligence? Thank you so much.

Jennifer Chung:
Thank you, Nazar. I actually do see two more questions from the Bangladesh Remote Hub. This is good; this is exactly when you should be asking the questions. I will read them out. Question one is: most of the people in developing countries are far behind in internet connectivity and electronic devices; how can we benefit from or enjoy AI services and facilities? I think this question was already answered. Oh, and the second one was also answered too. So it’s just Nazar’s question. I think both Kamesh and Jörn did touch on the regulatory framework, so if I could turn to both of you first to intervene, and then we’ll go to the concluding remarks for the speakers. Kamesh? Thank you so much for that question.

Kamesh Shekar:
And I will also keep some points for the concluding remarks in terms of what the way forward is. Answering that question directly: any consideration at the regulatory level has to take into account the innovations and the positive things the technology is bringing. So whatever regulatory lever we move towards, any intervention has to, A, be implementable, in terms of understanding the nuances of the emerging technologies, and B, not be disruptive. Especially in developing countries, such innovation is now trying to solve a lot of traditional problems in critical sectors. So let’s look at it from both sides: while it solves problems, it should not create problems. How we can make those checks and balances in a balanced way is what has to be done. But I’ll come back with more in the concluding remarks.

Jennifer Chung:
Thank you, Kamesh. And Jörn, your take on the regulatory frameworks?

Jörn Erbguth:
Well, the approach the EU takes is a risk-based approach, meaning regulate where there is high risk and regulate very little where there is no or almost no risk. Of course, it is difficult to assess what kind of risk is really involved, in particular when you see systems like LLMs that can be used for fun with no risk, or for serious purposes with quite a considerable risk. So I’m not sure if this will be the best approach, but at least I think it’s a reasonable approach to start with. So look at the risk, look at the applications, and then regulate the applications using AI, and do not regulate the technology per se, which can be used in quite different ways.

Jennifer Chung:
All right, Jörn, thank you so much. It is really important to note that the EU specifically is more advanced in looking at the regulatory framework, so it is good to see the comparison between the regions as well. I’m not sure if Victor, Pamela, or Umut would like to also intervene on that regulatory framework question. If the answer is no, maybe we can have a last call for any burning questions from the floor or online. I think the answer is no. So maybe we can ask our speakers to give us what the NRIs can do, the actions that we can take forward, or really just concluding remarks: what is the main takeaway you would like us to remember coming out of this session? If we can start with Victor.

Victor Lopez Cabrera:
Well, I think NRIs are doing exactly what they should do with these sessions, with the opportunity to explain what is happening and to see the actors. One thing: explainability, for example, was a characteristic of the expert systems of the 70s and the 80s, and the researchers at that time said that if a computer cannot explain its behavior, people will not trust it. So it’s not something new, because human beings do not trust another human being who cannot explain why and how. But the problem with LLMs is that they are black boxes, due to their mathematical and computational nature. And finally, I would say that it is the responsibility of research organizations and academic institutions to help citizens understand what the shortcomings are, what these systems are good for, and to fade away a little bit from Hollywood, because sometimes people get scared just because they think the world is going to come down to something like The Matrix, and that’s not going to happen anytime soon. But at the same time, we have to explain that you have to take some responsibility for what you do with AI, because it’s not only who develops it, it’s also who uses it, and uses it well.

Jennifer Chung:
Thank you, Victor. Yes, everybody needs to take responsibility for that. Pamela, your concluding remarks and takeaways? Thank you.

Pamela Chogo:
What I would wish to say is just to remind each other that AI is a global asset, so we should not look at it as a threat, but rather learn and work together to ensure that we enjoy the benefits it brings. So let’s have more discussion forums, let’s share our work, and let’s build the AI we want. An AI code of conduct or AI convention is very, very important. Thank you.

Jennifer Chung:
Thank you, Pamela. Yes, that is very important, building the AI we want. Umut, your takeaways and any commitment to action?

Umut Pajaro Velasquez:
Well, my invitation is not only to continue with these spaces where we can have this kind of discussion and share what is going on in our regions at a national level, but also to commit ourselves to bring what we discuss inside these intergovernmental spaces to our governments, and to the final users of these kinds of technologies. Because sometimes we are missing that aspect, the human aspect, the users of these technologies. We always talk about needing a human-centered AI, and human-centered AI starts with focusing on the users of these technologies.

Jennifer Chung:
Thank you, Umut. Jörn, if I may turn to you, I know there is a comprehensive call and commitment to action, but your takeaways as well?

Jörn Erbguth:
Well, I would like to stress that flexibility is key, because we don’t know what applications will be there, and we don’t know what practical risks there will be, so we really need flexibility. To give you one example, explainability: we said in the discussion that explainability is not really there. So you could either resort to some fake explainability, some general explanation that does not explain the decision that you are presented with, or you could look for different mechanisms that solve at least a little bit of the problem. For example, you could give the system to the users and tell them: you can play around with it, you can see which parameter you need to change to get to a different decision. So you don’t really know why the system has reacted as it did, but you know what would have needed to be different to get a different outcome. And I think this little example shows that we really need to be creative, need to look at flexibility, and we need to adjust regulation. The EU has been trying to be at the forefront of regulation, trying to regulate technology before it’s there, and this approach has limits; we need to be flexible. We know we need regulation now, but we also know that we don’t really know what the regulation needs to look like in 10 years.
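[Editor’s illustration] The mechanism Jörn sketches — probing which input change would flip a decision, rather than explaining why it was made — is often called a counterfactual explanation. A minimal sketch of the idea; the loan-scoring rule, names, and thresholds here are entirely invented for illustration, not any real system:

```python
# Counterfactual probing: treat the model as a black box and search for
# the smallest change to one input that flips its decision.

def approve_loan(income: float, debt: float) -> bool:
    """Toy black-box decision: approve if income comfortably exceeds debt."""
    return income - 2 * debt > 1000

def flip_income(income: float, debt: float, step: float = 100.0) -> float:
    """For a rejected application, raise income until it is approved."""
    if approve_loan(income, debt):
        return income  # already approved; nothing to flip
    while not approve_loan(income, debt):
        income += step
    return income

# Rejected at income=2000, debt=800; what income would have been enough?
needed = flip_income(2000, 800)
print(f"An income of {needed} would flip the decision.")
```

The user still does not learn the model’s internal reasoning, but learns something actionable: what would have had to be different for a different outcome.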

Jennifer Chung:
Thank you, Jörn, for the reminder to be flexible. That is extremely important. Tanara, your concluding remarks and main takeaway.

Tanara Lauschner:
Thank you, Jennifer. I also believe that discussions like these are really important for us to move forward in developing a community-driven governance for safe AI. At the last edition of the Brazilian IGF this year, we discussed criteria for human review of automated decisions, the intersections between AI, privacy and data protection, digital sovereignty, et cetera. Also taking place in Brazil this year was the Lusophone IGF, where we discussed the representation of the Portuguese language in AI models and data sets, and vice versa. In this sense, we should commit to fostering more debates and actions, both within the Brazilian IGF community and among NRIs as a network. I think international cooperation is an essential step in ensuring AI inclusivity on a global scale. Trust is really important. We need to trust AI systems, but for that it is necessary to know how they work, what they do, what they don’t do, and what we don’t want them to do. When discussing AI, we must ensure it is used to enhance our digital landscape responsibly, with guidelines that prioritize human well-being and involve input from all stakeholders. Thank you so much.

Kamesh Shekar:
Thank you, thank you so much for that, especially the point on trust. I guess that’s the key, and what we don’t want these systems to do is also important. Adding to some of your points, my final remarks would be two important takeaways: we need collaboration and coordination. When I talk about coordination, a lot of coordination is needed at a global level; various entities who have an interest in taking forward these conversations have to come together, and the question at this moment is how they all come together and have a conversation, and where that conversation starts. I think that’s going to be very revolutionary. And second, when I talk about collaboration, and especially your point, which somebody online also mentioned, what is important is how the public and the private, the government and private actors, can come together. We have been talking a lot about regulatory frameworks, where legislation, rules and guidelines are one way of looking at it. But one of the mandates, one of the principles, of governments is also to ensure that the market works. So you can also use market mechanisms: as we move forward, all of these conversations become fruitful only if the end stakeholders, the developers and deployers, actually use such frameworks. For that, there is a need for buy-in from those stakeholders themselves. One way of doing it is compliance, but that’s a burden, especially for a technology which is still evolving, and especially in developing countries.
So we need to figure out a way in which such governance frameworks are picked up by the market by itself, where companies start seeing a value proposition, a competitive advantage, in doing so; as you mentioned, trust. Trust could be one of the aspects where they start seeing that value proposition, and maybe using a responsible AI framework or such principles can bring trust. Sometimes, when we talk about regulations and frameworks, it’s always negatively connoted that they are going to bring burden and compliance. But I think the nuance of this conversation has to be shifted, so that we see this as something that should be picked up for your own good in the long run as well. So I guess these are the key takeaways. Thank you. Thank you so much, Kamesh. I think they have concluded much

Jennifer Chung:
better than I can encapsulate, but I will end with this. There is a need for trust. There is a need for flexibility in developing regulatory frameworks. And the most important part is multi-stakeholder participation, both in designing this process and in implementation and actual deployment and use. So thank you all for your time. Thank you to all of the NRI colleagues for giving us all of these best practices and good learnings, and thank you. Anya, can I check with you if you wanted a picture for the NRIs here, or is that not something that was requested? Okay, if we can quickly come and take a picture, we can end the session. I mean, we have ended the session, but we’ll take a quick picture.

Audience: speech speed 109 words per minute, speech length 408 words, speech time 226 secs

Jennifer Chung: speech speed 162 words per minute, speech length 3319 words, speech time 1228 secs

Jörn Erbguth: speech speed 135 words per minute, speech length 1293 words, speech time 574 secs

Kamesh Shekar: speech speed 188 words per minute, speech length 2593 words, speech time 826 secs

Pamela Chogo: speech speed 152 words per minute, speech length 1142 words, speech time 451 secs

Tanara Lauschner: speech speed 112 words per minute, speech length 1027 words, speech time 548 secs

Umut Pajaro Velasquez: speech speed 128 words per minute, speech length 1165 words, speech time 547 secs

Victor Lopez Cabrera: speech speed 143 words per minute, speech length 1647 words, speech time 689 secs