GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Alan Paic

The Global Partnership on Artificial Intelligence (GPAI) is an international initiative focused on promoting the responsible adoption of artificial intelligence (AI). Established in 2020, GPAI currently has 29 member countries. Its mission is to support and guide the ethical and trustworthy development of AI technologies.

GPAI operates through a multi-layered governance structure comprising a ministerial council, executive council, steering committee, and a multi-stakeholder experts group. The ministerial council convenes once a year, while the executive council meets three times a year. The current lead chair of GPAI is Japan, with India set to assume the chairmanship in the future. This multi-level governance approach ensures that decisions are made collaboratively and with diverse perspectives in mind.

Project funding for GPAI is obtained through various mechanisms. Initial funding was provided by France and Canada, with additional contributions coming from GPAI pooled seat funding. In-kind contributions from partners and stakeholders are also welcomed to support project funding. This approach allows for a diverse range of contributions and promotes broad participation in GPAI initiatives.

GPAI is actively involved in a global challenge aimed at building trust in the age of generative AI. In collaboration with multiple global organizations, GPAI has structured the challenge into three phases: identifying ideas, building prototypes, and piloting and scaling. This global challenge seeks to address the proliferation of fake news and the growing threats to democracies. By fostering trust in generative AI, GPAI aims to ensure that AI technologies contribute positively to society.

Alan Paic, a strong advocate for GPAI, provides an in-depth overview of its governance, membership, and initiatives. His support for the project reinforces the importance of responsible AI adoption and the need for international cooperation to address the challenges associated with AI technologies. Paic also promotes the upcoming global challenge, highlighting the importance of building trust in AI systems.

Beyond its governance work, GPAI has made significant substantive contributions to the field of AI. It has played a pivotal role in promoting detection mechanisms for AI-generated content, emphasizing the importance of accountability and transparency in AI technology. The impact of these efforts is evident as some countries have incorporated GPAI's guidance into their legislation.

Looking towards the future, GPAI envisions becoming the global hub for AI research and resources. To achieve this, GPAI aims to pool together global resources and expertise in AI. By bringing public research institutions together and collaborating with international networks, such as the Worldwide LHC Computing Grid, GPAI seeks to enhance understanding and advancements in AI technology.

In conclusion, GPAI is a major international collaboration that aims to promote responsible AI adoption, build trust in AI systems, and address the challenges posed by AI technologies. With its multi-layered governance structure, project funding mechanisms, and involvement in global challenges, the partnership is crucial for shaping the future of AI in a responsible and ethical manner.

Audience

The analysis of the speakers' statements reveals several important points regarding the Global Partnership on Artificial Intelligence (GPAI) and its work. During the meeting, GPAI presented and discussed its work streams, generating significant interest. It was particularly noteworthy how these work streams were mapped against the Hiroshima commitments, underscoring the relevance and alignment of GPAI's activities.

In addition to the mapping exercise, there was a request for insight into GPAI's future work and thoughts on partnerships. This emphasizes the need for ongoing collaboration and clarity regarding GPAI's direction and objectives. The speakers expressed a neutral stance on this matter, seeking more information and guidance.

GPAI's efforts to address concerns and challenges in the field of artificial intelligence were highlighted. These include ongoing interactions with the Executive Council, which funds various projects. The council funds both ongoing and completed projects that aim to advance AI, with reports on project progress available on GPAI's website, ensuring transparency and accountability. Additionally, GPAI seeks advice from experts in various fields to ensure the quality and relevance of its projects.

Gender diversity and equality in AI emerged as significant concerns during the meeting. Paola Galvez, a gender advocate, questioned the presence of activities related to creating diversity and gender equality in AI. This raised an important point about the need for inclusivity and addressing the gender gap in the field.

India expressed optimism about leading GPAI in the future and raised the question of whether there would be an initiative to bridge the gender gap in AI. This indicates a willingness to take action and promote gender equality within GPAI's activities.

Peru, as the first country to have a law on AI for social purposes, expressed interest in becoming a member of GPAI. This demonstrates the broader international appeal and recognition of the partnership's significance in advancing AI policies and governance worldwide.

Slovakia, a non-member of GPAI, is considering membership and seeks further information. Specifically, it is interested in understanding GPAI's specific regulatory support activities and how non-members can participate in the upcoming India summit. This suggests growing interest and potential expansion of the partnership's membership.

The analysis also highlighted the issue of fragmented and limited public sector research on AI. The majority of research and development is concentrated in a few large private companies. This underscores the need for increased collaboration and knowledge sharing between the public and private sectors to ensure a more comprehensive and well-rounded understanding of AI.

GPAI aims to address this fragmentation by pooling resources and establishing partnerships with other international networks. The goal is to leverage the collective expertise and resources of all member countries to achieve a greater impact on AI research and development.

Civil society and think tanks expressed keen interest in participating in the upcoming summit, showcasing their desire to contribute to the discussions and exchange of ideas. This indicates the increasing recognition of the importance of diverse perspectives and input in shaping AI policy and governance.

Finally, Ben, an advisor to the Westminster Foundation for Democracy, noted the challenges and opportunities posed by AI in election processes during the Indian presidency. This highlights the need for careful consideration and the development of strategies to address potential risks and harness the benefits of AI in these critical democratic processes.

In conclusion, the analysis of the speakers’ statements reveals various important points regarding the Global Partnership on Artificial Intelligence. The mapping of work streams against the Hiroshima commitments generated interest, while questions were raised about future work and partnerships. Gender diversity, membership expansion, public sector research, civil society involvement, and AI in election processes were also discussed. These insights emphasize the need for collaboration, inclusivity, and thoughtful governance in shaping the future of AI.

Kavita Bhatia

During the discussion, the speakers highlighted India’s vision for artificial intelligence (AI) and its potential to drive social and economic growth. They emphasized the importance of AI in bringing efficiency to administrative procedures, which in turn could contribute to economic growth. By automating various tasks and processes, AI has the potential to streamline operations, increase productivity, and foster innovation.

Furthermore, the speakers discussed how AI could empower citizens by providing them with easier access to their entitlements, thereby contributing to social growth. AI has the potential to bridge gaps and provide services to citizens more efficiently, improving their overall experience. This inclusivity was seen as crucial, particularly in a country like India that boasts a diverse linguistic landscape. The speakers stressed that AI should be inclusive and enable citizens to access services in their vernacular languages. In support of this, they highlighted the creation of a multi-modal AI platform called 'Bhashini', which facilitates speech-to-speech machine translation in multiple languages.

The discussion also delved into the significance of skilling initiatives in preparing for an AI-driven future. Efforts to inculcate AI knowledge at the school level were mentioned, underscoring the need to equip students with the necessary skills and knowledge to navigate the evolving technological landscape. The availability of financial support for PhD students in the field of AI further highlighted India’s commitment to fostering expertise and innovation in this domain.

The need for the Global Partnership on AI (GPAI) was brought to the forefront during the discussion. The speakers emphasized the importance of GPAI as a central point of contact for AI-related information, standards, and frameworks. India's involvement in GPAI was highlighted, with the country taking the lead chair and hosting the upcoming summit in December. The aim is for GPAI to have an independent identity, similar to that of the World Health Organization (WHO) in the field of health.

Finally, the speakers emphasized India’s AI approach of democratizing access to AI resources. This involves streamlined access to high-quality datasets, which are vital for research and innovation. Additionally, India aims to ensure access to compute power and skilled resources, acknowledging the significance of these factors in driving AI development.

Overall, the discussion highlighted India’s comprehensive vision and approach towards AI. By focusing on inclusive AI, skilling initiatives, global collaborations, and democratizing access to resources, India aims to harness the potential of AI to drive social and economic growth while reducing inequalities. The insights gained from the discussion underscore the need for a holistic and collaborative approach towards AI adoption and development.

Inma Martinez

The Global Partnership on AI (GPAI) has played a pivotal role in guiding the responsible development of Artificial Intelligence (AI). Between 2015 and 2018, AI experienced exponential growth and brought about significant advances in various areas, including neural networks, language models, computer vision, AI-driven drug therapy development, and level 5 automation in cars. These advancements have had a transformative impact on society.

GPAI emphasizes the importance of responsible and trustworthy AI. As AI technologies continue to evolve, there is a growing need to ensure that their development and use adhere to ethical principles and best practices in data governance. GPAI also recognizes the significance of fostering innovation in the future of work, highlighting the need to address the challenges posed by AI and promote responsible practices.

In addition, GPAI promotes the deployment of AI for industry and enterprise applications. Through a project that supports small and medium enterprises, GPAI assists these organizations in identifying suitable AI solutions for their challenges and finding local AI solution providers. This initiative aims to enhance the competitiveness of these enterprises by leveraging AI technologies.

GPAI also addresses concerns about intellectual property rights in AI. The organization has a project dedicated to this issue, recognizing the importance of creating a framework that protects and encourages innovation in AI while providing mechanisms for intellectual property rights.

The proposal to establish an expert support center in Tokyo has received positive feedback. This initiative aims to strengthen the support system for experts involved in project-based activities. Once approved, this center will provide valuable resources and expertise, further enhancing GPAI’s capabilities.

GPAI actively seeks partnerships and values decentralization to bring in as much external expertise as possible. By collaborating with research and innovation centers and specialists from various fields, GPAI ensures diverse perspectives and a multi-stakeholder approach in addressing AI-related issues.

In terms of regulatory activities, GPAI plans to organize workshops in an incubator style, covering topics such as contract law and AI intellectual property. These workshops, led by renowned expert Lee Tiedrich from Duke University, seek to bring together specialists and encourage the exchange of knowledge. AI scientists and practitioners from any country are invited to contribute to these regulatory activities.

While acknowledging the risks associated with generative AI for democratic countries, GPAI remains driven by shared democratic values. This emphasis on democratic principles further strengthens GPAI’s commitment to addressing the challenges and ensuring responsible AI deployment.

GPAI’s projects encompass responsible AI and data governance to enhance democracy and protect human rights. The organization actively works on initiatives such as human rights projects related to data governance. By focusing on these areas, GPAI aims to utilize AI for the betterment of society, welfare, and the creation of equitable opportunities.

Overall, GPAI’s efforts in advancing AI, promoting responsible practices, supporting industry applications, addressing intellectual property concerns, establishing expert support centers, promoting partnerships, and safeguarding democratic principles demonstrate its commitment to creating a beneficial and ethically-driven AI ecosystem.

Yoichi Iida

The Global Partnership on Artificial Intelligence (GPAI) is an international collaboration aimed at promoting the responsible deployment of AI technology in society. GPAI has focused on a range of topics, including responsible AI, data governance, the future of work, and commercialization and innovation through AI technology. This comprehensive approach demonstrates GPAI’s commitment to addressing various aspects of AI and its impact on society.

To facilitate the exchange of ideas on the implementation of AI, GPAI has organized over 20 side events, providing a platform for experts, researchers, and stakeholders to come together and share their insights. These events have played a crucial role in promoting dialogue and knowledge-sharing among different actors in the AI ecosystem.

The collaboration between GPAI and other international streams has been deemed vital for achieving effective AI governance. Discussions on AI governance have been integrated into the G7 agenda, highlighting the importance of addressing the risks and challenges associated with AI on a global scale. This collaborative approach ensures that diverse perspectives and expertise are considered in shaping policies and frameworks for responsible AI development and use.

Recognising the need to strengthen GPAI, Yoichi Iida, a key advocate, believes in the significance of establishing an expert support centre in Tokyo. This centre would serve as a valuable resource by providing expert-level assistance to GPAI’s initiatives. It is noteworthy that the government is actively involved in supporting this proposal, both financially and through providing necessary personnel resources. This commitment further emphasises the importance placed on GPAI and its mission.

A proposed third expert support centre in Tokyo would operationalise the strengthening of GPAI. This new centre would play a crucial role in implementing projects and promoting the visibility and awareness of GPAI’s activities. Through this initiative, Yoichi Iida aims to enhance the understanding and perception of GPAI’s work, both within Japan and internationally.

In conclusion, GPAI is at the forefront of promoting responsible AI technology deployment in society. With a comprehensive focus on various aspects of AI and its impact, GPAI has facilitated knowledge exchange through side events and engaged in collaborative efforts with international partners. The proposed establishment of an expert support centre in Tokyo further reinforces the commitment to strengthen GPAI. Overall, Yoichi Iida’s efforts highlight the importance of responsible AI development and the need for global cooperation in shaping its governance.

Abhishek Singh

India is preparing to host the Global Partnership on AI (GPAI) summit in Delhi from 12th to 14th December. The summit aims to become the leading platform for AI, bringing together nations, stakeholders, industry, and academia to discuss and collaborate on AI-related challenges and opportunities. In addition to the main themes for GPAI and its working groups, the summit will feature an AI expo and an AI game changers competition for startups. The deadline for startups to enter the competition has been extended from 15th October to 15th November.

India is also working to expand the membership base of GPAI in order to include a broader range of perspectives, engaging with the Secretariat and member countries to determine how the partnership will be expanded.

India has made significant progress in integrating AI into digital public infrastructure projects, such as the identity platform, digital payments ecosystem, and document exchange platform. It believes that AI can enhance the value and effectiveness of these projects, addressing challenges in areas like healthcare and agriculture.

Collaboration is crucial for regulating AI and ensuring its fair and widespread application. India is collaborating with other nations and experts to create frameworks and guidelines for responsible AI use, addressing ethics, data governance, and other important issues. Gender bias exists in AI algorithms due to biases in input data, but efforts are being made to encourage more women to participate in AI skilling programs and to balance representation.

India recognizes the significance of collaboration in AI and is introducing a collaborative AI theme for its 2024 presidency, exploring shared compute infrastructure and datasets for research. While the GPAI summit is primarily open to existing member countries, non-members are encouraged to participate in side events and exhibitions. India is willing to showcase its digital public infrastructure projects and AI developments to visiting countries, believing in sharing advancements and promoting international collaboration in the digital space.

Overall, the GPAI summit presents an opportunity to come together, collaborate, and shape the future of AI, with a focus on responsible and ethical development and deployment.

Session transcript

Inma Martinez:
Yes. Welcome, everyone, to Session 111, the Global Partnership on Artificial Intelligence, a multi-stakeholder initiative on trustworthy AI. My name is Inma Martinez. I'm going to moderate this session. I'm also participating as one of the panelists because of my role as chair of the multi-stakeholder expert group. I will make the introductions of the members of this panel. To my right, my colleague, Yoichi Iida, who is deputy director for the G7 and G20 relations at the Ministry of Internal Affairs and Communications at the government of Japan and who is also the co-chair of the Global Partnership on AI Steering Committee. And online, we have my colleague, Alan Paic, who is head of the GPAI Secretariat hosted at the OECD. And from India, we have the CEO of Digital India Corporation and India AI, Mr. Abhishek Singh, at the Ministry of Electronics and IT of the government of India. And we're also expecting a fourth member of our panel, who is Ms. Kavita Bhatia, who is group coordinator of the Emerging Technologies Division at the Ministry of Electronics and IT at the government of India. And the order of the session intends to provide you with a scope of what GPAI is as a multi-stakeholder initiative of international scope and running now in its fourth year. And each of the members of the panel, in their capacity as co-chairs and as organizers of the next presidency, will reflect on how GPAI is delivering value to the world, to the member countries, and what we hope for the future. And I would like to invite our colleague, Alan Paic, who is the head of the Secretariat at the OECD, who is online, to start the presentation with an overview of GPAI as an organization. Alan, the floor is yours.

Alan Paic:
Thank you very much, Inma. And it is my pleasure to address you today. I will give an introduction to GPAI as a multi-stakeholder initiative on trustworthy AI. So, GPAI is a long-term initiative which is specifically dedicated to AI-related priorities. And it has a multi-stakeholder focus to convene experts from a wide range of sectors. And the mission of GPAI is really to bring both countries on one side and experts from different stakeholder groups together to support and guide the responsible adoption of AI. And we know that especially in 2023, this becomes of very high relevance: to ground the adoption of AI in human rights, inclusion, diversity, gender equality, innovation, economic growth and environmental and societal benefit, and to contribute concretely to the 2030 Agenda and the UN Sustainable Development Goals. This has been GPAI's mission since its creation in 2020. And as I said, in this year it became more and more evident how important this is, and how big the risks are, in parallel, of course, to the huge opportunity which is given by the development of the technology. So we do have a global and inclusive membership, which is open to emerging and developing countries as well as developed countries. And this membership is informed by the shared principles which are reflected in the OECD Recommendation on AI, which is today also widespread. The G20 has also adopted a similar set of recommendations inspired by the OECD recommendations. So the GPAI members today, we number 29 members. And as I have mentioned, we do have an increasing number from emerging and developing countries, including Argentina, Brazil, India, Mexico, Senegal, Turkey, and Serbia, for example. But we also do have, as you see, most of the major leaders in AI technology on board as GPAI members from the government side.
Now, how does GPAI function? We do have a very elaborate governance. We have a ministerial council and an executive council, which are representative of the member countries. Then we do have a steering committee, which is the place where the different stakeholders meet, so where we do have representatives of both the member countries and the experts. And then we have the multi-stakeholder experts group, of which Inma is the chair. And this experts group accounts for expert working groups, and currently two expert support centers, in Paris and Montreal. And their work is also supported, or will be supported in the future, by the work of national institutes, so institutes which will contribute to the concrete projects which GPAI is putting forward. Now the GPAI council, as I mentioned, has two parts. It has the ministerial council, which meets once per year, and our next meeting is in New Delhi, and our colleague from India, Mr Abhishek Singh, will talk about this forthcoming summit, which is very important. The Executive Council meets at the working level three times per year and gives guidance to the GPAI Secretariat on internal processes, on project financing and the work plan, and approves the GPAI budget. So for the Council this year, the lead chair is Japan. The incoming chair is India, who will become the lead chair for the next year. And the outgoing chair is France, who was the lead chair last year. The Steering Committee, as I mentioned, is the place where all the stakeholders meet in this multi-stakeholder initiative. So we do have in the Steering Committee six representatives of the governments and six representatives of the experts. Within the representatives of governments, we have three representatives, which are the co-chairs of the initiative. We have two additional government representatives appointed by the Executive Council.
And then we have a specific seat which is reserved for an emerging or developing country, which is also appointed by the Executive Council. And this shows the commitment which GPAI has to supporting membership from such countries. And then the Steering Committee also does meet sometimes in the extended format, where all the GPAI members are invited to participate in the deliberations. I think one important point is the project funding. How are the projects funded at GPAI? So we do have the baseline budget envelope for projects, which has been historically provided by France and Canada, who were, at the origin, the founding members of GPAI. This is going to be complemented now by a mechanism through GPAI pooled seat funding, where all the countries will be able to contribute. Then the second part of the project funding comes from in-kind contributions through National AI Institutes, who can contribute in-kind computing power, data resources, human resources, et cetera, into GPAI projects. And then finally, we do have also partnerships, and partnerships can be both in-kind and financial contributions to specific projects in the GPAI work plan. I would like to mention also the GPAI webpage, which I would encourage you to visit. You will find a lot of exciting information there. You will find our new reports. We have two new reports which are featured on our webpage, which are recent and very topical. One is about generative AI, jobs and policy response, and the other one is about AI foundation models and detection mechanisms, saying that whatever new foundation model is put out there needs to provide a detection mechanism which would help us identify that a text has actually been produced by an AI. You will find information about the GPAI Summit. At the GPAI Summit in New Delhi, we will be launching many more new reports which are just being finalized right now. So watch this space for the new exciting results from the GPAI experts.
And also, you will find information about the G7 commitments to advancing generative AI policy, where this collaboration does include GPAI. And finally, you will find information about the global challenge to build trust in the age of generative AI. We know how fragile trust is today, with the proliferation of fake news and big threats to our democracies and so on. So we do want to have this global challenge. This is a big initiative which is launched in collaboration with the OECD, with UNESCO, with AI Commons, IEEE, and VDE. And we are actually right now in a call for partners. So a call to all of you who are listening in today: if you are interested to collaborate on this very exciting initiative, please do click on the partner inquiry form. Just very briefly, to explain what this is about: it is in three phases. In the first phase, we do want to identify promising ideas for policy and technology solutions for building trust in the age of generative AI. Those ideas which do show potential will get some resources to build a prototype in the second cycle, and the successful prototypes then will be encouraged to pilot and scale in the third cycle. So it is a very exciting new transversal initiative with global partners. Please feel free to apply and partner with us. Thank you, Inma, and over to you.

Inma Martinez:
Thank you so much, Alan. I'd like to move now to a very important aspect of what GPAI is, which is our mission and our vision. And I would like to invite my colleague Yoichi Iida to present to us GPAI's mission and vision and really what the Executive Council has done to address the emerging challenges that AI is presenting, with the dynamism that is required, during the presidency of Japan, as well as how Japan in this year has been steering the GPAI mission in a very, very tough and conflicting year with lots of work to do, especially from the G7 perspective of having to come up with the Hiroshima process. Thank you, Yoichi. The floor is yours.

Yoichi Iida:
Okay, thank you very much, Inma, for your very kind introduction. And good morning, good afternoon, good evening, everybody. My name is Yoichi Iida, actually formerly Deputy Director General for G7 and G20. At this moment, I'm working as Assistant Vice Minister at the Ministry of Internal Affairs and Communications. And my work is mainly looking at multilateral digital policymaking, including GPAI, OECD, G7, G20, and also the IGF altogether. And this is a very, very busy year for us. But at the same time, we are very happy to see different frameworks being synergized with each other, not only around AI but also other digital policymaking, including data flows or infrastructure and whatever. So with regard to the GPAI operation, we took the lead chair position in the late months of last year and we are working as the lead chair for 2023. GPAI has a very unique structure, not only in the organizational structure as presented by Alan in his presentation, but also in the process structure. The lead chair country hosts its summit meeting on the very first day of the chair tenure, so it happened late November last year, and on that very day we succeeded to the lead chair position from France and we put in our effort to continue GPAI's successful work and also to further promote its very important responsibility in promoting the global AI ecosystem. Actually, in the GPAI executive council, all member countries are working together on how to promote the responsible deployment of AI technology into society through four different working group topics, which are responsible AI, data governance, the future of work, and commercialization and innovation through AI technology in society. So we, the government members, are closely working together with the private sector experts to promote those projects in four categories, and as lead chair, Japan wanted to promote the uniqueness of GPAI and also to address the weaknesses of GPAI.
So in the beginning of our chair tenure, we recognized through our discussions in previous years that some of the challenges for GPAI would be, first, how to strengthen the messaging, delivering our message to the rest of the world, and in order to achieve that we decided to elaborate the first ministers' declaration to be delivered by the ministers at the GPAI summit last year. That is one instrument. And secondly, we wanted to add a very timely topic for AI development, AI for resilient society. So we added AI for resilient society to the priority topics for GPAI activities. And then we also wanted to strengthen the opportunity for experts to bring their message to the rest of the world through the many events associated with the GPAI ministers' summit in the same venue in Tokyo. So we had more than 20 side events where many experts presented their own views and also did some exchanges on how we could work together to promote responsible deployment of AI technologies in our society. These are the three major focuses which our lead chair presidency worked on, and I hope these three emphases contributed to some extent to the development of GPAI work this year. And we also wanted to create a stronger synergy between GPAI work and other international work streams. That is why we picked up the AI topic as part of the G7 agenda this year, and we discussed AI governance, global AI governance, as part of our working group discussion, and we agreed on further work to build up the global governance policy for generative AI. And in order to do that, we have agreed that we should work closely with international organizations, including GPAI and OECD and UNESCO, to understand better how generative AI would support human society, what kind of risks and challenges may come up, and how we could address those potential risks and challenges through collaboration with international organizations and the experts working there.
So I will stop there, but these are the efforts of Japan's lead chair presidency, and I hope these efforts will be carried forward by India as lead chair next year. Of course, we in Japan will continue to contribute to GPAI's work next year and beyond. Thank you very much.

Inma Martinez:
Thank you very much, Yoichi. As co-chair of the steering committee and a colleague of Yoichi throughout this year, I have to say that the Tokyo summit was probably one of the best events I've ever attended. As a scientist myself, I felt that the sessions, not just the ones the members of the multi-stakeholder expert group presented but also the peripheral sessions from academia and industry that came to the summit, were truly exciting and really to the point of the times, because AI these days grows exponentially, not linearly, and everybody has to react much faster than before, with better solutions and forward thinking. So thanks very much, and we are very grateful for all the work that Japan has done for GPAI this year. I'd like to proceed and present the other pillar of GPAI, which is the multi-stakeholder expert group, the group I joined in 2021 when the government of Spain, one of the members, nominated me. I entered one of the working groups, innovation and commercialization of artificial intelligence for small and medium enterprises, and last year, during our elections, I took the chair. This is a very singular community, because it is the first time that the AI community, not just academics but also industry, AI scientists, lawyers, civil servants, organizations working on ethics, and trade unions, have come together to work on very specific initiatives. This puts us in a position where we serve at the pleasure of the members, but at the same time the members give us the flexibility to propose approaches: how would we, as scientists, as AI people, deal with some of these challenges? This synergy is what makes GPAI truly special and a first for many governments when it comes to AI advisory. And the makeup and fabric of the MEG is quite varied. 
More than half of the expert group is composed of AI scientists, true AI scientists, complemented by other people from trade unions, from civil society, and from industry who also have long careers in artificial intelligence. I would say the average age is above 40 for sure, if not more. Because the membership at present has a huge component of European countries, and the member countries nominate the experts, we have many European experts in the group. But we are also expanding to bring in experts not just from the membership: when we work on projects that require specific skills, we invite other members of the AI community to work with us as specialists. And this is one of our keenest efforts for the years to come, to bring in the voices of developing nations, of emerging markets, of other scientists who do incredibly valuable work that complements our own. The gender gap is not too bad, I would say; we are making huge inroads in closing this 15 percent difference, because one of the points of governance that we have in the MEG, as we call it, is to ensure representation, not just between the genders but also across geographies. Every working group has two co-chairs, and we try to bring people from different continents. And what is it that we do? Lots and lots of calls on video platforms, because all of the experts are based in their home countries. We are supported by the two expert support centers, CEIMIA in Montreal and Inria in France. This is probably one of the best elements of GPAI's governance: it is decentralized, and at the same time each member, in this case France and Canada, puts forth its centers of innovation and research to our advantage, to our support. And we organize summits. We held our summit online in 2020, and we did it for the first time in person in Paris in 2021. 
We did it in Tokyo last year, and we very much look forward to the summit in India this year. As my colleague Alan mentioned, we engage with other organizations in common goals and projects, like the Global Challenge which Alan explained earlier. Now, why GPAI engaged the AI community has a very simple answer. Between 2015 and 2018, the advancements in artificial intelligence grew exponentially. We had huge advances in neural networks and in transformer-based language models. We saw advances in computer vision, AI-driven drug therapy development, and level 5 automation in cars. It was a huge wave of advancement in AI that really put a new perspective on almost 70 years of artificial intelligence. And that's why the original founding members decided we needed an association, an international initiative, able to understand this massive exponential growth and transformation, so that governments could really get on a roadmap of dealing with it with innovative approaches. And GPAI, because it is an initiative, has a governance that is very singular. We have a federated governance: every country puts forward support from its leading local institutions, and every member country of GPAI brings individual and collective leadership, exactly what we do in the expert group. And we look for multi-stakeholder equity, because we know that AI has to be an AI for all. This is why not just the council brings in new members from all geographies of the world; the experts do the same. And one of the things that differentiates our projects from other projects around artificial intelligence in the world is our mandate from the council: come up with solutions, real actionable solutions that go beyond policy. Yes, you can advise on policy, of course you do, but bring solutions that we can implement, that we can roll out in our global markets, and also find standards for all of us to agree upon. 
The way we understood that mandate, especially in 2023, when the emergence of generative AI brought a new perspective and enormous challenges to society and governments, was by bringing the members and the experts together. This is something very singular. We created a town hall in May, in which the experts invited all of the member countries to attend, and we explained how we, the AI community, understood the risks, but also the opportunities, of language models, foundation models, and generative AI in general. It was a town hall format: anybody could ask anything, and we made it very free-flowing, so that for the first time it was a conversation, it was interactive. In September, we hosted the first innovation workshop for members, their delegations, and our experts at CEIMIA in Montreal. What you do in an innovation workshop is challenge ideas, and check whether everybody understands the same thing when we discuss, for example, risks. Are the risks for a member country in Europe the same as for one in the Americas? Do we all prioritize the same challenges in the same way? So we took the existing roadmap of challenges, risks, and projects that the council and the GPAI expert group had, but put it through the lens of: are we all on the same page? Have things changed such that we need to modify some of these assumptions and hypotheses? We really behaved as artificial intelligence scientists, and it was really successful, because everybody felt that for the first time governments and experts were working together for two days in the same room, as you can see from the pictures, throwing out ideas, challenging ideas, and agreeing on approaches. And the way we operate is on a four-pillar structure. The big theme of artificial intelligence, as you all know, is to ensure that it is done responsibly, that it is trustworthy, and that it carries the ethics of the future that we want for our people. 
We also concern ourselves with how the future of work will evolve as artificial intelligence comes into industry and society. And then a pillar of AI is data, so of course data governance is one of the most active working groups in the MEG, along with innovation and commercialization of AI. AI is finally becoming a product and a service; it is coming into industry, it is coming into the hands of people, and we need to ensure that service level agreements and human centricity by design really come with it. As one of our experts put it, AI should come to us in a state that is already safe, so that we don't need to make it safe because it is dangerous. We should really strive for an AI that comes to us in the best possible state. So how do we respond to the challenges that the member governments undergo on, I would say, a monthly basis? Well, one of the big challenges was presented to us by the Hiroshima process in May. Together with the OECD, we were called upon to support the G7's vision of how we needed to act quickly, steadily, and on very solid ground with regard to generative AI and advanced AI. And immediately we looked around and realized that we were already working on absolutely all of the points that came out of that mandate, out of that call. So, as you can see from the list, we are obviously taking measures to ensure that the risks are met and addressed. We also need to mitigate vulnerabilities: how something comes to market, and what capabilities and limitations something coming to market in inappropriate ways would create. The way we respond to all of these objectives varies. In some cases we produce ideas, such as sandboxing for responsible AI. In others we look at what Alan mentioned: detectors, real tech that actually addresses the dissemination of fake news and misinformation. 
And as you can see from these columns, for the elements of risk and concern listed by the G7, we already had projects operating in the different spheres of what needs to be done. If you want a deeper view of what these typical projects are: one of the most exciting, which has actually been presented in the EU Parliament, is being incorporated into the amendments to the AI Act, and has also been presented in the US Congress, asks whether we can create detection mechanisms to ensure that when this type of AI is commercialized, it already comes with detection mechanisms that people themselves can action and test, is this fake news, and that the social media platforms can use as well. This is real tech addressing a technological problem. It is not a policy, it is not a framework; it is very much an asset. In innovation and commercialization, we have a project that has already entered beta: a portal launched in Singapore, France, Germany, and Poland as the beta testbeds, in which small and medium enterprises of all sectors can consult which AI solutions are appropriate for some of their challenges and gaps, and not only that, but who the AI solution providers are in their local markets. So this is another asset put in the hands of industry. And one of the most exciting projects, really addressing something incredibly hard to frame, is: can we make IP out of artificial intelligence? We started this project in 2022, and it had a fantastic 2023, because we organized workshops in conjunction with, and with the support of, other research institutes like Max Planck in Germany, and also at Duke University. It is really addressing contract law, and contract law is very hard, because the way contracts are drafted is an art: they have to have the proper address, in the proper language, and really provide guarantees. 
When it comes to intellectual property, contract law is expanding, and I invite you to follow this project, because in 2024 we will create an incubator. So if you work in IP law and your focus is artificial intelligence, please contact us, because we will be running this incubator in 2024. And the next steps for other projects, for example the one that I lead, because it is the one I created when I joined, really encompass all nations. Agriculture is one of the pillars of our civilization, and artificial intelligence is creating prosperity and new ways to ensure that arable land doesn't decrease, that water resources are preserved, and that we really can feed 10 billion people in sustainable ways. And regulating AI as well: what is the landscape of AI regulation across the board? How is each nation dealing with its own AI regulation? Can we find standards? This is another exciting project. As well as the future of work. The future of work is very vital, because there is much misunderstanding as to what AI brings to industry and society, and much fear about perhaps being relegated to a secondary role, as humanity and as workers. This is one of the working groups with the most activity. They have projects for 2024 in which they will work with university students in South America, looking at the impact of generative AI in Spanish, and they will try to get down to areas where we can learn from developing countries, as well as how the working conditions of employees and workers are changing with the rise of AI within their companies. This is just a picture; I hope that you can visit our website and get familiar with the rest of our projects. And as I said before, we are a growing organization, and the strength of the collective comes from what the individuals bring to us. And now it is my great pleasure to introduce to you our future presidency lead in GPAI, which is the government of India. 
And for that we have with us our colleague on the steering committee, Mr. Abhishek Singh. I believe he's online from India. The floor is yours. Oh yes, he's there. I think I need help from the AV team; I should not be on the big screen now. Can you please give the floor to Mr. Abhishek, who is online? I can see him; please disconnect me. He's on mute. Okay. I think, Abhishek, you are now able to speak. A little bit. Abhishek, could you speak louder? We don't hear much. No, nothing comes through. Okay. How about now? Now you are. Thank you so much for your patience. Thank you. I don't know

Abhishek Singh:
what the glitch was, but anyway, what I wanted to say is that my colleague, Kavita Bhatia, is here. She will be making a presentation with regard to the summit that we'll be hosting in December. So, Kavita, can you share the slides and make the presentation? Yeah. Good afternoon,

Kavita Bhatia:
good morning, and good evening to all of you. I'll just share my screen. Is the screen visible? Is the screen visible? Yes, yes. Yes, Kavita, it's visible, and you could switch on your camera also, maybe. I don't think I'll be able to, because this is running from the one to… I'll do it afterwards. So, India's vision for AI: we understand that this technology brings a lot of focus on emerging technologies, but we want to ensure that the technology brings social and economic growth in line with inclusive development. In fact, our Honorable Prime Minister has always been saying that we need to make AI in India and make AI work for India. And he also believes that the technology should be rooted in the principles of sabka saath, sabka vikas, which means together with everyone, development for everyone. So this is India's basic vision for AI. In fact, we have a very simple approach, because you all know that India is a very large country, and we know that AI has the potential to improve public service delivery by bringing more efficiency to administrative procedures and keeping the citizen at the focal point, so that the services we develop with the help of AI are beneficial to the citizen. AI also needs to overcome the traditional barriers to inclusivity, so we want to make it more inclusive for the development of large-scale social transformation solutions. As I said, AI enables policy developers to take the right decisions based on data, so that decisions taken for the development of social benefits are meant for citizens, and each citizen is able to get whatever benefits they are entitled to. In fact, this is not where AI should stop: it should also empower citizens so that they know their entitlements and the benefits they are supposed to get. 
And they should be able to approach the government, so that they are not debarred from the benefits they are supposed to get. AI also provides innovative models for governance, so that we can have innovation for the public good and create new economic opportunities. So this is the main focus and approach which India is taking forward. We have already come up with a strategy for AI which focuses on democratizing access to AI resources. By AI resources we mean streamlined access to good-quality datasets for research and innovation, access to compute, and, most important, skilled resources, so that innovation can be brought into the system. With these principles as our background, we have come up with a comprehensive program on AI which focuses on these three pillars. One of the most important pillars we have kept in this program is the National Center on AI, where we plan to implement 10 solutions across the country so that we can see the benefits AI brings to the nation. Responsible AI is also one of the pillars, as I said, which is very important, because solutions should not cause harm to human beings. So we have already detailed the principles of responsible AI, worked on the operationalization mechanism, and gone ahead with one use case applying these principles to facial recognition technology. As I said, India is a very large country: we speak 22 official languages and more, around 1,000-plus dialects. So we understand that AI should bring inclusivity and should enable citizens to get services in their vernacular language. We have already created a multi-modal AI platform called Bhashini, which is built as a speech-to-speech machine translation system. In fact, we showcased this solution at the G20, where we also added 10 international languages. 
The other important aspect we have worked on is a framework for fairness assessment, because the solutions brought to market or taken up for implementation must be fair and free of any attached biases. So we have come up with the framework, and along with BIS we are also working on the other standards which are very important for developing a successful AI solution. Skilling, as I said, is something we have already made note of, and we understand skilling is very important, so we are trying to cater to the skilling aspects at all levels. First, at the very initial level, when a child is in school, we want to help them understand what AI is, so that we can demystify the harms they have been told AI can bring. The second level is that we want to reskill and upskill our IT professionals so that they are up to the mark in the era of AI, and able to tackle the job losses which AI might bring about. And the third area we have tackled is researchers. We understand that we need more researchers so that we can develop our own LLMs, so we have come up with a program where we financially support PhD students in the area of AI and emerging technologies. So we are trying to cater to the skilling aspects at all three levels, so that not a single layer is left out. With regard to these principles and this vision for AI, this year we are going to take the lead chair in December. However, we have already started working for GPAI as incoming chair, and we will be hosting our annual summit in December, which has been talked about by Inma as well as Alan. In December we are going to have a summit where we will bring global policymakers together to have more discussions on responsible AI. 
Our main focus as incoming chair this year has been to increase member and expert collaboration, and in this regard we have already held convenings on three of the working groups which Inma has shown, data governance, innovation and commercialization, and responsible AI, where we brought in the AI ecosystem to understand what the GPAI experts have been doing, and we also wanted industry to come and share their experiences and the vision they want from GPAI. This we have already done, and the fourth convening, on the future of work, will be held shortly. The most important thing we plan to do in our presidency is to make GPAI an independent identity, a multilateral initiative, so that GPAI can be the point of contact for all AI-related information, standards, and frameworks, as WHO has been in the case of health. This is what we want to do in our presidency. We also want to enhance our advocacy efforts to bring more visibility to GPAI outputs, and we would like to proliferate and promote the adoption of the work GPAI has done over the last four years. We also want to increase participation and enhance the membership, so that we can bring in a wide variety of experts with different national and regional views and experience, and so have a holistic view of AI across the world. And last but not least, we want to promote equitable access to critical AI research and innovation resources, compute, data, algorithms, software, experts, and other related resources, for the countries which don't have access to them, so that everyone has equitable access to AI research and innovation. So this is our vision for our presidency, and we will be hosting a summit in December in New Delhi, India. With this, I would like to thank all of you, and I will come out of my presentation. Thank you.

Abhishek Singh:
Thank you so much. Thank you, Kavita. Just a minute, Inma. Thank you, Kavita, for laying down the vision and the plan that we have for the GPAI summit. We are looking forward to hosting all of you in Delhi from the 12th to the 14th of December, and as Kavita mentioned, apart from focusing on the themes for GPAI and the working groups, and getting all the stakeholders and all the experts to come and join, we are also having a few other add-ons to the summit. There will be an AI expo, in which we are getting startups from all across the world to come and show their AI-based solutions. There is also the AI Game Changers initiative that we have launched; we have shared the information with all of you. It is open to startups building any solutions related to AI, any dimension of AI. If they want to participate, the last date has been extended from the 15th of October to the 15th of November, and we would like you to share it with the relevant stakeholder community. We will also have a lot of side events focused on various themes of AI, so if any member countries or any of the stakeholders represented at the IGF would like to contribute to the discussions in the side events at the GPAI summit, we look forward to hearing from you, and to your involvement and participation. Because the way the summit is being planned in India, we want to make it the go-to event, just as the Internet Governance Forum is the go-to place for issues related to internet governance, an event we all look forward to annually. Similarly, GPAI, as the prime body with regard to artificial intelligence, will bring together all nations and all stakeholders, civil society, non-governmental organizations, industry, and academia, into this partnership. GPAI needs to evolve into that, and we are working towards it. 
And we are also working with the Secretariat and member countries on how GPAI will be expanded. So we look forward to getting all your views and to your participation in the GPAI summit in December.

Inma Martinez:
Thank you so much, Abhishek and Kavita. Thank you so much for showing us what is to come in 2024. I'd like to move to the discussion now, and I want to remind our audience that we will have 15 minutes of Q&A; you can use the chat or ask your questions here in the room in person. But first, I'd like to start with Yoichi Iida. You always talk about strengthening GPAI. Can you give us some highlights as to why you firmly believe in that, and what is to come in that respect?

Yoichi Iida:
OK, thank you very much for the question. Yes, we strongly believe that the strength and uniqueness of GPAI exist in its expert-level structure, and multi-stakeholderism is at the center of its value. So, ideally, the government and other stakeholders should support the private sector experts who are working on project-based activities through the working groups, and their work is now supported by two expert support centers, located in Montreal and in Paris. In order to strengthen GPAI's value and function, we would like to strengthen the support system for expert activities, and that is why we are proposing to add an expert support center in Tokyo. We put forward this proposal to establish a new expert support center in Tokyo at the executive council this year, and we believe the proposal was generally welcomed. Of course, we need to go through discussion at the steering committee and ministerial council, but once it is approved, we would like to operationalize the concept of a third expert support center. To do that, we at the same time need to prepare on our side to bring financial and personnel resources to manage the center, and we are now discussing internally, across the government, how we could do that. Of course, we as a government need to bring this into action. That is one of the ideas we want to realize. We are also trying to promote visibility and awareness of GPAI's activities among people, and we hope this will be promoted through closer collaboration between GPAI and the Hiroshima AI process, where we are discussing how we could promote and materialize project-based activities to accumulate evidence on what kinds of measures and practices might work to address some of the potential risks and challenges brought by generative AI and foundation models. We may also do projects to understand better how we could responsibly deploy generative AI and foundation models into society. 
These topics and projects can be implemented through the newly established expert support center in Tokyo. That is one of the ideas we are now trying to promote, and I hope this will contribute to the further scaling up of GPAI's function. Thank you very much.

Inma Martinez:
Thank you. Thank you so much, Yoichi. I'm really delighted to hear this because, as I mentioned earlier, the strength of GPAI resides in its multi-stakeholder equity and its decentralized, federated governance, and nothing would delight me more than having an expert support center in Tokyo. I'd like to ask something of Abhishek Singh that is also super exciting, because we are all looking forward to having India in the lead chair seat. I would like to ask him: every time that we have met, and you have come over to the working groups and looked at the projects, what do you feel is probably the most significant difference between our projects and your agenda as CEO of the digital agency in India, under a very visionary mandate from your Prime Minister?

Abhishek Singh:
Thank you, thank you, Inma. I must straightaway mention that one key value we get from being part of GPAI is interacting with the multi-stakeholder group, the centers of expertise in Montreal and Paris, and the experts who are all working on various projects across the four working groups that we have: we get a lot of insight with regard to what more can be done. As you rightly mentioned, India has been the leading country when it comes to implementation of digital public infrastructure projects. We have implemented Digital India, as we call it, as a brand, with digital projects implemented at population scale, whether it is the identity platform, which has more than a billion people registered and almost 70 million authentications happening on a daily basis; or our digital payments ecosystem, one of the world's most robust and largest digital payments platforms, with more than 10 billion transactions every month; or our document exchange platform, which we call DigiLocker, which has more than 200 million registered users. So in India, whatever we do is at scale. But now, as we move on and try to leverage artificial intelligence, we are seeing a lot of value addition when you bring a layer of AI to the digital transformation projects we have implemented. And when we do that, when we, for example, use face recognition for authenticating people, we are using a very simple AI tool, but then all the issues related to ethics and responsible AI come in. As AI adoption grows, the future of work comes in as a very important thing. We have a large population; almost five million Indians are working in the IT and ITES sector, but with the way AI is coming in, some of these jobs will be impacted. So we need to work with the global community. We need to work with the experts. 
We need to work with other nations, especially on coming up with frameworks and guidelines for regulating AI, ensuring that innovation and regulation go hand in hand, ensuring that we are able to provide equitable access to AI, ensuring that we are able to democratize AI, and, most importantly, bringing in an era of explainable AI. Very often, these things cannot be done by one nation alone, or by a few corporations alone. There is a significant concentration of AI technology in a few companies and a few nations, but if we are to harness the benefits of this technology, we need to take it further. We need to ensure that there is access to compute, that there are frameworks for data governance and for leveraging the data that can be used for building AI models, and that there are solutions that can be used for population-scale societal problems. For example, how do we build AI solutions for solving healthcare issues? How do we use AI to detect tuberculosis or cancer? How do we use AI to help farmers across the country? When we do that, the real benefits of AI will come in, and low- and medium-income countries are going to benefit a lot. So what we are doing is integrating artificial intelligence and advancements in the field of AI with the digital public infrastructure projects we have implemented, and working with the global community to fast-forward that. And whatever we have learned, whatever we have built, or whatever we are building together becomes part of the global DPI repository. As the G20 declaration mentioned building a global DPI repository, the AI solutions will also become part of that global repository, and many of these solutions, developed in cooperation with other member countries, will become available for adoption and replication across the world. 
So that’s the value we see in being part of GPAI, and the benefit we get from engaging with the real experts — real technologists, real engineers and real social scientists who are working in this field.

Inma Martinez:
Thank you. Thank you so much. I completely second everything that you said, and something very important that both of you have mentioned, especially in the themes of each presidency: a resilient society, empowering people to respond to challenges, and making AI equitable and accessible to all. This is the century of human centricity — putting people at the center of everything that we create, so that we can really create a future for everyone. I’d like to open the floor to anyone online or present in the room to take this great opportunity to ask any questions of Yoichi and Abhishek or myself. So if we have any questions, please raise your hand and somebody will bring you a mic. Or let me just check whether anything is happening online. Yes, we have one question over there. I’ll give you the mic myself. Please let us know who you are and your organization. There’s a switch. Great. How’s that? Perfect. Okay.

Audience:
First, I just want to say thank you so much for the great presentation; I found it really helpful. My name is Ed Teller, from the global AI policy team at Amazon Web Services. I thought the slides were all great, and the one I found particularly interesting was seeing how GPAI’s work streams are being mapped against the Hiroshima commitments. I wanted to ask a couple of questions about that: firstly, how you see that work going forward — because you’re likely going to see reporting against the Hiroshima principles, so I think that’s a really helpful lens through which to understand the work — and also how you’re thinking about partnerships and collaboration across those work streams too. Thanks very much for the really helpful presentation. Thank you.

Inma Martinez:
We have an ongoing interaction with the council. At the inception, the member countries set out their concerns, the challenges they felt they needed to address. We looked at those concerns and challenges from our perspective of engaged scientists — because we all work back home at our universities, labs and in industry — and decided how we would address each issue. We then propose projects and initiatives to develop; the council funds them, and we set ourselves to deliver the specific initiatives, the specific projects. Some of them are ongoing; others were completed within a period of two years. All of this is shared back with the member countries, and also publicly, because the reports on how we are progressing are posted on our website. Our approach to the projects is that we cannot be the only ones working on them. As Yoichi rightly said, the strength of GPAI is that once we get our mandate, we look at the world and ask ourselves: is there an expert somewhere whom we should invite to work with us on this project? Those are the specialists. For example, in my agriculture project I immediately reached out to NARO in Japan, the agency that looks at applying technology to agriculture; its director-general himself is my specialist, he comes to the meetings every two weeks, and he has brought information, insights, thoughts and strategies. That’s how we work, and this is how the community brings real insights, because they come directly from the places where AI is being created. And we are expanding: we are now moving into IP law, into civil society and ethics, and into companies that think about service-led innovation when putting products into the world.
What are the principles you are guided by when you create a product so that it is safe? That’s really the uniqueness of this initiative — it truly is unique, because there is nothing like it — and we hope to strengthen it with the support of our member countries, their leadership, and their own research institutes and innovation centers. That’s why it’s decentralized and federated. Thanks so much for the question. Thank you. We have another one here. Let me just pass you the mic.

Audience:
Good afternoon, good morning. My name is Paola Galvez. I’m Peruvian, right now based in Paris. I just finished my graduate internship at the OECD and my master’s at Oxford, and I am a former advisor to the Secretary of Digital Transformation in Peru, where I oversaw the design of the national AI strategy. I have one question, because personally I’m a gender advocate in whatever I do. I saw that in your Responsible AI work there is an activity number five on creating diversity and gender equality in AI — could you please explain or expand on it a bit? I would also ask India, the future chair — because I saw their principles, and it was really fantastic to hear their optimism and how they want to position GPAI as the multilateral initiative to really come to when we need expertise — whether you would be open to developing an initiative that works on bridging the gender gap in AI that we have in the world. And my second question would be: coming from Peru, I am happy to see many developing countries here, but what are the requirements for other countries to become members? Speaking of Peru, we are the first country to have a law on AI for social purposes. I think this is something our prime minister would be interested in, but I would like to know the requirements so that I can go to them and tell them what incredible initiatives you have. Thank you.

Inma Martinez:
So the gender question is for me or for Abhishek?

Abhishek Singh:
I can take that, no worries. Thank you. In fact, I really thank — I didn’t get your name — the lady from Peru for the question, a very rightful and very impactful issue, because gender has been an issue with regard to AI algorithms and the biases in AI, whichever way we work. That is primarily because of the bias that exists in the input data. If the data is not equitable — in the sense that the data is biased — then, to give a very typical example, AI models will treat engineers as men and teachers as women. These biases, if we are aware of them, can be resolved to some extent. That is at a very basic level. Other biases exist when it comes to gender and AI: very often in India, what we’ve seen is that even in AI skilling, because of societal biases, it is the men and the boys who take the dominant share of AI skilling and AI training — because of access to devices, and because they have access to higher education. So all those biases come in. We therefore have a conscious plan within India that whenever we undertake digital training and digital skilling, we try to balance it out and take proactive measures to encourage more women to take up courses and AI-based skilling, so that we have a fair balance. Whichever datasets are used, how do we mitigate the biases in them, to ensure that AI is gender-neutral and more equitable for the whole population? Those are some of the measures being taken, but yes, it will take a lot of time to train the models to be aware of the biases and how to get rid of them.
That work is part of the ethical framework and the responsible assessment of AI solutions, wherein we address biases coming in for gender, for race, or for other dimensions of diversity that exist in the world. So that’s on the gender issue. The collaborative part is, again, something very useful, and one of the themes we have introduced for the 2024 presidency concerns collaborative AI for building partnerships among various stakeholders. How do we join hands? How do we share knowledge, experience and models, and together work on building big solutions? In fact, Alan from the Secretariat very often talks about building a CERN for AI. Just as the world community has come together to work on particle physics and its advancement at CERN, can we think of having a shared compute infrastructure? Can we think of having shared datasets on which research could be done? Can we think of sharing insights and partnerships between AI researchers across institutions? That would really ensure that we work together collaboratively in the field of AI and develop AI for the betterment of humanity, rather than always being wary of the biases or the bad things that can come from artificial intelligence.

Inma Martinez:
Thank you. I believe the second question was about further countries joining. Maybe Alan? Okay. Hello.

Alan Paic:
Yes — further countries joining; well, I can mention that. We do have a membership process, which is well defined and described on the website. Further countries are invited to apply. Right now, the intake for 2023 has been closed, and there will be a next opening in 2024, so watch the GPAI website — everything is explained there. The deadline is around June or July, and countries are expected to present a letter of intent with their motivation to join GPAI as an initiative committed to trustworthy AI. So that is the possibility for membership. I would also like to react to what Mr. Singh just mentioned. Yes, we have been talking about the future perspectives of GPAI. GPAI has achieved a very significant impact; Inma mentioned previously, for instance, the detection mechanisms — the obligation for companies putting foundation models on the market to actually provide detection mechanisms that would allow us to understand that something has been produced by that foundation model. That is very significant and has already been taken on board in some countries’ legislation. Going forward, GPAI wants to provide more and more impact, and I’m really happy to hear that India has this vision, for the new year and new presidency, to lead GPAI towards pooling resources together. Basically, the idea comes from the understanding that today a lot of R&D is concentrated in the hands of a few large private companies, while public-sector research is far behind, has a very limited understanding of the new technological advances, and its spending is fragmented among different countries. So the idea, as Mr. Singh mentioned, is of GPAI as a go-to place where all the countries come together and pool their resources in data and in compute power — perhaps together with other international networks which already exist, such as the Worldwide LHC Computing Grid for particle physics — and this could take GPAI even further on this ambition. We do, of course, want to partner with private companies, that is great, but we also want to bring together public research institutions, and this is the model of the national AI institutes which I mentioned in my introductory speech. Thank you.

Audience:
Thank you so much. I think we have a question here. — Thank you very much. My name is Juraj Corba, representing the government of Slovakia, from the EU. I have two questions, if I may. In your presentation, which was very helpful and very informative — thank you for that — you mentioned that you are planning some activities in support of regulatory work. Could you please be more specific about what you are planning? That’s the first question. The second relates to the India summit, so it may be a question for our friends from India. You mentioned that you are planning to run the summit as the place where we go and come together on the AI topic. Slovakia is a non-member of GPAI; we are considering possible membership. My question is: how open will you be to the participation of non-member countries, and how can they effectively participate at the India summit? Thank you very much.

Inma Martinez:
So I will answer the first question, and I believe the second is for Mr. Singh. The workshops I mentioned — regarding contract law and AI intellectual property — are planned to take place in an incubator style. If you go online and look for this project, you’ll see the person leading it, Lee Tiedrich, who is a professor at Duke University, and she will be able to share the schedules. Because, as I mentioned, the projects seek specialists, and specialists are invited to work with us from any country. So if there are projects in which you feel some of your AI scientists and practitioners would really like to participate, we would like to invite them to contribute; the projects are open for collaboration. And then I believe the second question was for India, for Mr. Singh, for Abhishek: can government delegations attend the summit in India?

Abhishek Singh:
Yeah, thank you, thank you. I got the question. So thank you for the questions, Slovakia, and for the interest in the summit. The way GPAI is constituted, the ministerial and the various official engagements, which will happen on the 13th and the 14th, will only be open to the existing member countries. And of course there is the membership application — I don’t know how quickly we can work on approving the membership — but yes, there are the side events on the 12th, and there will be an exhibition which will be open to all guests. So if you want to come and visit, you can write to us and we’ll work out the modalities and the sessions in which you’ll be able to participate as a non-member; that we are willing to look into. We would also be showcasing — our prime minister himself said that if other countries are coming, we can showcase to them our digital public infrastructure projects, how these work, and what we have done in AI — and there are the side events in which they could participate. But yes, the ministerial, the steering committee and the executive committee — the formal events of GPAI — will be open only to the member countries.

Inma Martinez:
Okay, any other questions? Yeah, we have.

Audience:
Just a continuation of his second question. Let me introduce myself: I’m Kamesh Shekhar, from India — and yes, it’s a proud moment for us that we will be having the summit very soon. I’m from a think tank called The Dialogue; we are based out of Delhi. I had a follow-up question to his: you mentioned that there will be side events and other opportunities. I just wanted to understand, as a think tank and as civil society, how can we also take part in the summit, and what should we watch out for as the summit comes into the picture? That’s just the question I had, yeah.

Abhishek Singh:
So the details of the side events will be up on the website very soon, hopefully by next week or so, and we would welcome registrations for the side events from non-members also. There are a lot of think tanks who already want to take part; there is a lot of interest from industry and from the startup community. Within India, we had a big meeting yesterday in which more than 50 people participated, and they all gave various ideas about what we should be covering, especially with regard to building consensus on the key issues the world faces around the advancement of artificial intelligence and other technologies. So we look forward to that. We have been getting a very good response from all stakeholders, especially the G20 countries and countries beyond, for our initiatives. In fact, I would like to mention that even in the G20, as part of the Digital Economy Working Group, based on requests we received from multiple countries, we hosted a Global DPI Summit in Pune in June. That included a lot of countries outside the G20 — almost 50 countries took part who were not members of the G20, because they wanted to know what we have been doing in the digital space — and eight of those countries have already signed MOUs with us for the replication of some of the India Stack solutions in their countries. None of them were G20 members. A similar approach will apply here: the official meetings will be open only to members, but the non-official parts — the exhibition, the side events, the keynote talks (we are trying to get some keynote AI scientists and researchers who can come and deliver keynotes) — will be available to people who are not officially members of GPAI.

Inma Martinez:
Okay, and we have a question right next to you.

Audience:
Thanks very much. My name is Ben Graham-Jones. I advise the Westminster Foundation for Democracy, a UK public body working on democracy and elections, especially on issues pertaining to technology; I’m working on 10 to 15 elections. I understand that a big principle of GPAI is that it’s very much guided by the shared democratic principles of its members, and I’d be keen to know, as you move forward into the Indian presidency, whether there are also plans to address new issues around AI in election processes — both the challenges and the opportunities — moving into the year ahead.

Inma Martinez:
Well, from the Multistakeholder Experts Group perspective, we know that one of the major pressure points of generative AI, and of AI that has been misused, is the risk to democratic countries — the risk to democracy in the world. This cascades into various projects, not just one, because we believe the pillars of the world are democracy and a welfare society that looks after its people and ensures their well-being. If you like, I can look through all the other projects after the meeting and tell you which ones; the theme runs across various projects, from responsible AI to data governance. For example, data governance in 2022 had a specific project on human rights, and that obviously arises from non-democratic situations. So I can talk to you after the session. Are there any other questions? I think we have reached minus one minute, and I would like to thank Yoichi Iida, Mr. Abhishek Singh, Kavita Bhatia and our colleague in Paris at the OECD, Alan Paic, for convening and being with us, and for presenting our vision, our hopes for the future, and the singularity of this initiative. Many times, when people ask me what GPAI is, I say: when you wonder whether governments care about people, this is one of those cases. They truly do, and they do their best to really take the reins of our future and make sure that AI is for opportunity, for good and for welfare. Let’s hope that we can achieve that. Thank you all for coming to this session, many thanks to those who connected online, and I declare the session finished. Thanks very much for your questions. Thank you, thank you very much. Thank you.

Abhishek Singh

Speech speed

213 words per minute

Speech length

2545 words

Speech time

717 secs

Alan Paic

Speech speed

132 words per minute

Speech length

1792 words

Speech time

815 secs

Audience

Speech speed

169 words per minute

Speech length

967 words

Speech time

344 secs

Inma Martinez

Speech speed

137 words per minute

Speech length

4554 words

Speech time

2001 secs

Kavita Bhatia

Speech speed

167 words per minute

Speech length

1520 words

Speech time

548 secs

Yoichi Iida

Speech speed

95 words per minute

Speech length

1199 words

Speech time

761 secs

Fake or advert: between disinformation and digital marketing | IGF 2023 Networking Session #171

Full session report

Heloisa Massaro

The commercial marketing industry has always been a significant source of funding for newspapers and has a critical influence in shaping the information environment. Understanding how programmatic ads work and how they finance online ad campaigns is crucial for making informed choices and structures for online advertisements.

In Brazil, workshops were conducted with digital marketing actors, highlighting the necessity of integrating robust risk analysis into marketing and advertising content creation. This is aimed at tackling the risks associated with disinformation and hate speech. By embedding risk analysis, marketing campaigns and advertisements can be developed with the necessary precautions to counteract disinformation.

Heloisa Massaro advocates for the development of best practices and guidelines for the advertising industry to mitigate potential negative effects on the information environment. The Internet Lab conducted a project called “Desinfo,” initiating a dialogue on best practices and guidelines in the advertising industry.

The influence of digital influencers in politics is seen as a problem due to the difficulty in separating their work from their political marketing roles. This raises concerns about the transparency and credibility of the information disseminated by digital influencers.

Self-regulatory bodies play a crucial role in addressing disinformation in advertising. Discussions are held regarding measures to mitigate risks through self-regulation, promoting responsible advertising practices.

Exploring regulatory approaches is also important in handling disinformation in advertising. Mention is made of a platform regulation bill that tackles fake news in Brazil. These regulatory approaches aim to create a more accountable and transparent environment in the advertising industry.

To summarize, the commercial marketing industry significantly influences the information landscape. Understanding programmatic ads, integrating risk analysis, and developing best practices and guidelines are essential in addressing disinformation and ensuring responsible advertising practices. It is important to address the influence of digital influencers and explore regulatory approaches to mitigate potential negative effects on the information environment.

Audience

Political advertising plays a significant role in modern political systems, but it is a complex and problematic issue. This form of advertising has the potential to be weaponised and has frequently been used for data targeting, as highlighted by the Cambridge Analytica scandal. The misuse of data for political purposes poses a serious challenge to the integrity of elections and democratic processes.

It is argued that the role of political advertising needs better management and interventions to address these challenges. Election observation groups, such as the National Democratic Institute (NDI), engage in monitoring political advertising to ensure transparency and fairness. However, the Cambridge Analytica incident has underscored the need for stronger measures to regulate the use of data in political campaigns.

The involvement of digital influencers in political advertising further complicates the situation. There is a difficulty in distinguishing their actions as independent content creators from their role as political marketers. This blurring of lines makes it challenging to discern the extent of influence they have over public opinion and the potential impact on political campaigns.

To mitigate the risks associated with political advertising, it is argued that regulation should be developed to observe how advertisements contribute to disinformation in political campaigns. The dissemination of false or misleading information poses a serious threat to the integrity of elections and public trust. The difficulty lies in distinguishing between political content and other types of content circulating on the internet, which requires careful monitoring and regulation.

In Brazil, there is a self-regulating council for government advertisements. This council, overseen by the Brazilian Internet Steering Committee and advised by technical expert Juliana, aims to ensure that government advertisements adhere to ethical and legal standards. While the self-regulatory framework is in place, it is important to consider how measures to mitigate risks can interact with this framework and state regulations. The potential for regulatory capture within self-regulating councils and other complexities must be acknowledged and carefully addressed.

In conclusion, the role of political advertising in modern political systems necessitates better management and intervention. The weaponisation of political advertising, data targeting, challenges related to digital influencers, and the dissemination of disinformation all underscore the need for regulation and monitoring. As seen in Brazil, self-regulatory councils can play a role in ensuring ethical advertising practices, but it is crucial to consider the interactions between mitigation measures, self-regulatory frameworks, and state regulations. By addressing these concerns, steps can be taken towards fostering fair and transparent political campaigns and preserving the integrity of democratic processes.

Eliana Quiroz

An analysis of the role of marketing companies in the disinformation ecosystem reveals various perspectives. One viewpoint asserts that marketing companies are integral to the spread of disinformation. They excel in providing marketing strategies and facilitating effective micro-targeting, enabling the dissemination of misleading information. This complex ecosystem is formed by the involvement of multiple private companies in digital marketing and disinformation.

Contrarily, another perspective argues that the distinction between companies offering marketing services is blurred. This lack of clarity makes it challenging to define individual responsibilities in the disinformation ecosystem. For instance, Meta, a digital platform, provides marketing advice and services to influential clients, while newspaper companies in Peru act as intermediaries. This emphasises the need for a comprehensive understanding of the different actors involved to effectively combat disinformation.

The analysis also notes the impact of the Cambridge Analytica model on digital marketing and disinformation companies. This model, involving detailed data analysis and targeting strategies, serves as a reference for manipulating public opinion. However, its full implementation requires sufficient resources and interest. In cases of limited time or money, certain elements of the model may be utilised.

Having an understanding of country-specific marketing services is essential in addressing disinformation effectively. The analysis highlights the wide range of marketing services available in the global South, reflecting diverse resources. Additionally, journalists and influencers can play significant roles in the disinformation ecosystem. Therefore, a tailored approach is necessary to combat disinformation successfully.

Shifting focus to political advertising, the analysis underscores the importance of identifying the various actors involved to ensure transparency. The entities involved in political advertising include marketing companies, influencers, data providers, data analysts, media production companies, digital communication and public relations firms, and fact-checking and public opinion companies. A thorough understanding of this ecosystem is crucial for promoting transparency in political campaigns.

Regulation is suggested as a solution for promoting transparency and protecting human rights in political advertising. However, striking the right balance with freedom of expression is essential. It is recommended that regulation extend beyond digital platforms to include companies engaged in political advertising.

Lastly, the analysis highlights the significance of inclusivity and raising awareness of human rights frameworks among companies involved in political advertising. Some companies may not fully comprehend their role within the context of human rights. By fostering inclusion and promoting awareness, ethical implications associated with political advertising can be addressed.

In conclusion, a comprehensive understanding of the role of marketing companies in the disinformation ecosystem is crucial. The blurred boundaries between companies and the influence of models like Cambridge Analytica must be acknowledged. Tailored approaches, regulation, and a focus on human rights and inclusion are necessary to effectively combat disinformation and promote transparency in political advertising.

Anna Kompanek

The analysis explores the important role of the private sector, particularly local businesses, in addressing the issue of disinformation. It suggests that the definition of the private sector should be expanded beyond just big tech companies to include local business communities. These communities are both contributors to and victims of disinformation, making it crucial to involve them in tackling this problem.

The analysis highlights the need to sensitize companies about the potential ramifications of their advertising placements. It points out that companies may indirectly support disinformation through their advertising spending, with ads appearing on disreputable websites associated with disinformation. Therefore, companies must go beyond simply reaching audiences and consider the potential negative consequences of their ad placements.

The business community is seen as a key player in improving information spaces and combating disinformation. It is noted that a growing segment of companies is recognizing the dangers posed by disinformation. These companies can support independent journalism through ethical advertising and other means. By investing in healthier information spaces, businesses can contribute to creating a diverse and reliable range of information for the public.

The analysis underscores the need for global support and responsible business practices to foster healthier information spaces. The report by the Center for International Private Enterprise (CIPE) and the Center for International Media Assistance (CIMA) emphasizes ethical advertising as one way to support independent journalism. It suggests that responsible businesses have the power to promote and maintain healthy information spaces through their practices and collaborations.

Independent journalism is emphasized as being vital in combating disinformation. It is recognized for providing a diverse range of information to the public, countering the spread of false or misleading information. This underlines the importance of supporting independent journalism in efforts to tackle disinformation.

Furthermore, the analysis notes that local businesses can play a significant role in investing in healthy information insights and independent journalism. They can contribute through various strategies, such as ethical advertising, impact investment, blended finance, corporate philanthropy, and corporate social responsibility (CSR) initiatives. These initiatives enable local businesses to have a positive impact on information spaces and support the work of independent journalists.

Collaboration between government, civil society, and the private sector is identified as essential in addressing disinformation effectively. It is noted that the biggest danger lies in governments passing laws without consulting civil society and local private sector representatives. On the other hand, collaboration and dialogue can lead to more informed policies and effective measures against disinformation.

A noteworthy observation is the value of bringing local business organizations together as part of broader coalitions to secure the information space. In the Philippines, for example, the collaboration between the Philippine Association of National Advertising and the Makati Business Club was instrumental in discussing and addressing issues related to information security. By uniting local business organizations, effective measures can be taken to safeguard information spaces and combat disinformation.

In conclusion, the analysis underscores the crucial role of the private sector, particularly local businesses, in addressing disinformation. It promotes the inclusion of local businesses in efforts to combat disinformation and emphasizes the need for responsible advertising practices and support for independent journalism. Collaboration between government, civil society, and the private sector is crucial, and local business organizations can contribute to securing information spaces through broader coalitions. By working together, these stakeholders can foster healthier information environments and mitigate the negative impacts of disinformation.

Herman Wasserman

Disinformation has been a longstanding issue in the global south, with its roots tracing back to colonial periods. During this time, various forms of communication and propaganda were used to justify the subjugation of the colonised. In the post-colonial era, states in the global south have continued to control the media and engage in disinformation campaigns, aimed at limiting critical voices and maintaining their power.

Scholarly production around disinformation surged in 2016, following the elections in the United States, and has grown steeply since, bringing increased attention to the issue. The advancement of new technologies has further amplified existing trends and forms of disinformation, posing a significant challenge to the global south.

The global south faces a dual threat to its information landscape, both externally and internally. Foreign influence operations draw on historical loyalties and presences in the region, while repressive states exploit the fight against “fake news” to enact laws that effectively criminalise dissent and restrict freedom of expression.

Another factor contributing to the proliferation of misinformation in the global south is misleading advertising and sensationalist journalism. These practices can promote false information and pose a challenge to the sustainability of small, independent media outlets which often rely on advertising for financial support. Economic downturns, in particular, can lead to cutbacks on advertising, further threatening the viability of local news outlets.

Despite these challenges, citizens in the global south are actively combating disinformation through various strategies. These strategies are often intertwined with other struggles, such as those for internet access, digital rights, media freedom and education. It is crucial to acknowledge the agency of individuals in the global south in the fight against disinformation.

In terms of political advertising regulations, South Africa currently faces a disconnect between the outdated regulations and the current social media climate. Regulations primarily focus on traditional broadcast channels and newspapers, failing to address the unconventional methods employed by political parties in the digital realm. As a result, there is a need to update and adapt regulations to match the evolving landscape of political advertising.

While formal regulation is an important aspect of controlling political advertisements, it is insufficient on its own. Public awareness and understanding of political communication play a pivotal role, along with fact-checking as a crucial part of political discourse. A coalition of journalists and civil society organisations is necessary to scrutinise political parties’ claims and ensure accuracy and transparency.

In conclusion, the issue of disinformation in the global south is multifaceted and complex. It stems from historical contexts and continues to be perpetuated by external influences and domestic repression. Misleading advertising and sensationalist journalism add further challenges to the region’s media landscape. However, the agency of citizens, along with updated regulations and collaborative efforts, can mitigate the effects of disinformation and uphold peace, justice and strong institutions in the global south.

Renata Mielli

The analysis provided reveals the detrimental consequences of false and misleading information being spread through the Internet and digital platforms. It argues that the Internet has allowed the dissemination of unreliable news and misleading content on a large scale, negatively affecting society. This widespread dissemination of false information has drawn attention to its harmful effects on society, as it undermines the credibility and reliability of information sources and can potentially manipulate public opinion.

The findings also highlight the role of digital platforms in amplifying and promoting misleading, false, and harmful content. It is noted that content with demonstrably false information circulates more widely than verified content, feeding the business models of digital platforms. This is further exacerbated by the use of personal and sensitive data by digital platforms, enabling targeted advertising and content distribution across various platforms. The promotion of such content through sponsored and boosted content has a greater impact on reaching internet users.

In response to these issues, the analysis suggests the need for regulatory initiatives and stricter rules in online advertising. It argues that these regulations should consider specific aspects of information flow, the advertising market, and its actors, as well as how the business models of large platforms favor misinformation. The analysis emphasizes the importance of establishing strict measures for transparency and advertisement, as well as the corporate responsibility of intermediaries and links in the advertising chain in relation to the integrity of public debate.

Moreover, the analysis supports the call for more transparency and stricter rules in online advertising. It advocates for the disclosure of the reach and profile involved in advertisements or boosted content, contributing to accountability and limiting the dissemination of false information. The analysis emphasizes the significance of establishing clear guidelines and measures for transparency and advertisement.

Additionally, the analysis highlights the need for locally designed policies to regulate online platforms. It points to the Brazilian Internet Steering Committee’s consultation process on platform regulations, which addressed issues about concentrations in the online advertising market and the risks of the platform business model, such as disinformation and infodemics. This emphasizes the importance of tailored regulations that consider the specific challenges and dynamics of each region.

The analysis also discusses the challenges of conceptualizing political advertisement and the negative impact of advertisements on health. It acknowledges the difficulty in determining whether political party content should be classified as advertisement or not. Furthermore, it raises concerns about the effect of advertisements on health, particularly during the pandemic, emphasizing that misleading advertisements about medicines can negatively affect people’s lives.

Notably, some arguments within the analysis reject the idea of self-regulation in the advertisement sector. They highlight the impact of advertisements on health and emphasize the need for a more serious public discourse on advertisement. They advocate for increased scrutiny and public engagement to address the negative consequences associated with advertising.

In conclusion, the analysis provides insightful observations on the harmful effects of false and misleading information disseminated through the Internet and digital platforms. It emphasizes the need for regulatory initiatives, transparency measures, and stricter rules in online advertising to protect society from the adverse consequences of misinformation. The analysis also highlights the importance of tailored, locally designed regulations and discusses the challenges surrounding political advertisement and the impact of advertisements on health.

Session transcript

Heloisa Massaro:
So, hello, everyone. Hello, everyone who is here and who is watching us online. Thank you very much for being here. I know it’s almost the last session of the almost last day, so it’s really great to have you here to hear us. So, thank you to everyone, and thank you to our panelists: Renata, who is on my right side, Anna, who is on my left side, and Eliana and Herman, who will be joining us online. Just to give a quick overview of the topic and why we proposed a session on it: what do we want to discuss? We want to discuss the role of marketing and advertising dynamics and actors in the information environment, what the risks and implications are, and, this is the key issue, how we can build best practices and guidelines for the advertisement industry. And I think it’s worth mentioning that this topic actually appeared for us through a project we developed last year, which we called in Portuguese Desinfo. And I realize that I haven’t introduced myself: my name is Heloisa, and I am a director at InternetLab, a Brazilian think tank on digital rights and Internet policy. Last year we developed this project called Desinfo, which aimed at starting this conversation on best practices and guidelines for the advertisement industry, bearing in mind the role of the marketing industry in the information environment. And this is important because the marketing industry has always had an important role in shaping and influencing the information environment, and it’s interesting to think about it on two levels. The first is more economic and structural: the way commercial marketing structures itself, where it puts money, and where it places its advertisements.
Commercial marketing is normally a key source of funding for information and for newspapers, and it has always been. But on a second level there is also the narrative side: marketing upholds and creates narratives that impact the information environment, and this becomes more prominent in the digital era, where the production of information is decentralized and new forms and strategies of digital marketing appear. There is also, we can say, a crisis of the authority of science and journalism. With all this together, this theme becomes even more important when we move online. So, during this project, what we did, and I will stop here and pass to our invitees, was to map the initial themes and subjects that relate marketing and advertisement in general, and digital marketing in particular, to the information environment. Based on that, we workshopped these themes with Brazilian digital marketing actors. This was a collaborative project developed with a marketing agency, and the goal was to try to understand how these themes appear in their daily work and how we could move towards guidelines and best practices. The result was what we called a working guide, actually a guide in development, for digital marketing, which covers topics such as influencer marketing and the importance of ethical safeguards and good practices when hiring influencers for marketing; social media ads and website banners, what we call programmatic ads, and the importance of developing an understanding of how they work and of who will be financed depending on the choices and the structures; and finally, the narratives that can be created and fostered by advertisement campaigns.
And I would say that the key takeaway of these workshops, which is in the guide, is the importance of embedding risk analysis with regard to disinformation and hate speech in the whole process of developing marketing campaigns and advertisement strategies. So, I would say this was a really rich process, and we were really able to engage with a lot of marketing actors in the country, but it was, as I said, a first step. We wanted to open the conversation. And the aim of this panel is to dig into this topic and to create the opportunity to develop further the challenges and the possible ways forward. So, I have said enough, and I will pass the floor. What we will do is a first round of five minutes with our speakers, then we will open the floor, and then we’ll get back. So, first, I want to invite Eliana Quiroz, who is joining us online. She is a member of the Board of Internet Bolivia and a PhD candidate at Universidad Mayor de San Andrés in La Paz, focusing on disinformation’s impact on marginalized communities. She holds a Master in Public Administration and has 20 years of experience in international cooperation agencies, including the World Bank and the United Nations. In 2021, she researched disinformation during Bolivia’s political crisis and authored the first academic handbook on Internet and society in Bolivia. Eliana will give us a brief overview of the role of digital marketing in the information disorder. Eliana, please, the floor is yours.

Eliana Quiroz:
Hi, Heloisa, thank you. Good morning here, and I guess good afternoon and good evening there and everywhere. Thank you for the invitation, and your introduction was really great, especially because Brazil is one of the best-known examples of disinformation campaigns in the world, and you come with examples from the grassroots, from practice. I want to share some initial thoughts from my research in Bolivia and also try to understand which are the actors of a disinformation ecosystem. When we are talking about that, it’s very obvious that private companies are key actors in the disinformation ecosystem. And I’m talking about, of course, marketing companies, which are the focus of this session. But when I try to identify these marketing companies in practice, the borders between different private companies offering different services blur. We can find, for example, digital platforms giving advice on marketing strategies. It’s very well known that when Meta has big clients, let’s say clients that are going to spend one million or five million dollars on their platforms, on Instagram and Facebook, they bring in an intermediary between this big client and Meta. And this intermediary is there to help them micro-target and direct the ads in the best way. In that moment, this intermediary is providing services around digital marketing strategies. In Bolivia, for example, this intermediary was a bureau of lawyers. But in Peru, for example, it’s El Comercio, a well-known mainstream newspaper. Or, when I’m talking about blurring these borders that are not so clear, marketing companies offering databases or even data science services.
So, my first point here is that when talking about marketing companies as an actor in this information ecosystem, we are really talking about different private entities providing services and, of course, having an interest in making money out of this business. So, we should try to understand each ecosystem and the practices as they work in each country, to identify not only marketing companies but different private companies, actors and entities. The second idea is that these companies have a model, and the model is Cambridge Analytica. When you have a lot of money and a lot of interest, you will have the whole Cambridge Analytica model. And when there is less money or less time, you will have only some parts of this model. And I’m saying this thinking about the South. In the South, you will find, for example, some countries that do have a lot of money to spend, perhaps the case of Brazil, Mexico, the Philippines, and there you will find almost the whole model. But sometimes there is less money, so you will find perhaps not marketing companies but influencers or content creators, what we call TikTokers, providing these services of digital marketing campaigns. And even journalists. Journalists are having a very hard time because the business model of the media is in crisis, and many journalists are in the streets without employment, but they know about information flows. So perhaps we will find some journalists providing some part of these digital marketing services. In the South, I would say, you will find a wide range of digital marketing services for campaigns, for disinformation campaigns. So, again, when we are looking at a specific country, it’s good to understand that you will find different actors.
A lot of actors playing some part of the roles of the ecosystem of disinformation. I would say that to begin. Then we will dig a little bit more on perhaps the solutions or the way forward.

Heloisa Massaro:
Thank you, Eliana. I will now pass the floor to Anna Kompanek, who is here in person with us. Anna is Director for Global Programs at the Center for International Private Enterprise, where she manages a portfolio of programs spanning emerging and frontier markets around the world in CIPE’s core themes of business advocacy, strengthening entrepreneurship ecosystems, institutional trust, economic inclusion, and organizational resilience. Kompanek holds a BA in International Studies from Indiana University of Pennsylvania, a master’s degree in German and European studies from Georgetown University School of Foreign Service, and an MBA from George Mason University. She is a Certified Compliance and Ethics Professional International and a graduate of the U.S. Chamber of Commerce Institute for Organization Management. Anna will dig into the role of the private sector and how we can work towards developing good practices and recommendations.

Anna Kompanek:
Thank you so much, Heloisa. And I feel like I should start with an explanation. You know, when we say private sector in fora like the IGF, typically what comes to mind is big tech companies. Here we talked about marketing companies. I want to expand that definition a little bit further and focus on a different segment of the private sector, which perhaps I could call just the local business community for more clarity, because that is the segment that my organization, the Center for International Private Enterprise, or CIPE, or SIPE if you speak Spanish, or I guess Portuguese works as well, engages. That is the market segment, if you will, that we work with. And I have to say, in conversations about combating disinformation and building healthier information spaces, the role of the local private sector as a stakeholder and potentially an ally is not often talked about. So I appreciate this opportunity. Because ultimately, you mentioned the marketing companies, but there is also a question: what about the companies that pay to have their advertisements placed in different online spaces through the marketing agencies? What we’re seeing in countries around the world is that in many cases, maybe just out of a basic lack of knowledge, companies don’t necessarily think about how their marketing spend may be contributing to disinformation, because their basic metric when they buy ads is eyeballs, right? How many eyeballs are seeing this ad? Does it help us generate more sales, and so on? But they don’t always consider other risk factors; for instance, some of the advertisements may appear on websites that are well known to be associated with disinformation or disreputable in some other ways.
So there’s just a basic question of sensitizing companies who pay for advertising to think beyond that: what are some other ramifications of where that money goes and where the ads appear? And of course, I want to make it clear, the local business community in any country is not a monolith, so companies themselves may also be contributing to disinformation. In many cases, it’s commercially motivated disinformation, when companies publish or pay for coverage that is not factually accurate, let’s just say, of a competitor. So in some ways, there might be contributors to the disinformation problem. And of course, if their advertising spend directly or indirectly supports disinformation, that’s a problem. But companies are also victims of disinformation, be it through direct impact on their brand, or more broadly through the declining quality of the overall information space. If the overall quality of journalism in a country suffers, ultimately those companies may not be able to get economic information and policy information that is trustworthy and crucial to their operations. So when we think about who the key ally would be, that doesn’t necessarily mean that every company in a given country is interested in doing something about combating disinformation. They may not be; they may be actively involved in spreading it. In many cases, you have state-owned companies or otherwise politically controlled companies that may also be not-so-great actors. But I would say there’s a growing segment of companies, and they are a worthy ally, who recognize the dangers and who also, frankly, see the business case to improve their own conduct, their own information footprint, if you will, and also to support healthier information spaces, and not just through marketing spend. There are many other ways in which companies can be constructive actors in supporting independent journalism.
If we have time and the conversation goes that way, I’ll be happy to highlight other examples. For now, let me just mention that one of the resources that may be of interest to the audience here is a report that my organization and the Center for International Media Assistance, or CIMA, worked on together jointly. It’s called Investing in Facts, how the business community can support healthy info spaces, where we did sort of a global scan of different ways in which private companies can be involved in supporting ethical independent journalism and strong independent media spaces. Ethical advertising is one of those ways but there are others and I’ll be happy to get into that if we have time. Thank you.

Heloisa Massaro:
Thank you, Anna. This is really a great point, and it was actually one of our takeaways from the project as well: engaging in countering disinformation, or in an ethical information ecosystem, is also important for companies and brands themselves, because it helps with their public relations too. So thank you. Thank you so much. And now we will go to Herman Wasserman. Herman is a professor in the Department of Journalism at Stellenbosch University, and he is joining us online today. He currently holds a professorship in media studies at the University of Cape Town and previously directed the Center for Film and Media Studies. An accomplished alumnus of Stellenbosch University, Wasserman’s academic journey spans esteemed institutions in both South Africa and the United Kingdom. His extensive research in media, democracy and society has earned him international recognition, leading to memberships and leadership roles in prominent academic associations. Herman, thank you for joining us today. Herman will cover for us the disparities in the comprehension of disinformation between the global north and the global south, and how these can impact the discussion we are having here today. So please, Herman, the floor is yours.

Herman Wasserman:
Hello everyone. It is a great privilege to be joining you, unfortunately not in person but remotely. I have received two questions, is that correct? One on the disparities, and the second on the role of advertisement. So I’ll say something very brief on both, to allow more time for discussion and questions, as this is obviously not the optimal way of making a broad contribution, but here are maybe just some points to consider. On the first question, the disparities between global north and south and how disinformation dynamics manifest in these regions, I think there are two main points to consider. Firstly, disinformation has existed in the global south for a long time. We have seen recently that it has become a preoccupation in scholarship and policy debates in the global north. We can track that, and we have tracked in our research that scholarly production around disinformation peaked in 2016; no surprise why that is the case, around the elections in the US at that point. From then on it grew very steeply in terms of scholarly research. But when we actually consider the presence of what we now call disinformation, it is on a continuum with communication strategies, types of communication, even propaganda, that have been present in the global south for a long time. And not only disinformation but also the other related issues, such as the pressure on the information environment, the pressure on free and accurate exchange, the pressure on the public sphere, all of these things that we now associate with what has come to be called the information disorder. These discourses and these trends have been in the global south for a long time. One could even say, I think, that the discourses that kept colonialism in place were often a type of disinformation that served to justify the subjection of the colonized.
And then in the post-colonial era, if we look at my continent, Africa, it is very clear that states have often limited critical voices by owning and controlling the media, controlling the public sphere, and engaging in disinformation campaigns. So when we again see today that governments in Africa and elsewhere use the excuse of fake news to enact laws that criminalize dissent, there is a continuity that is important to note. I think it is really important that we see this in the global south in the long historical moment. Also, if we look at foreign influence operations today, it is important to recognize that they often draw on historical loyalties and historical presences in Africa, and that there is this longer historical view. So that is, I think, the first point to make: there is a continuity, and we should not see disinformation in the global south as something that is entirely new. We have to understand it within a longer historical perspective, even if these trends and forms of disinformation are now facilitated and amplified through new technologies; it is a continuation of an older threat. The second point, maybe, is that we now see a double threat to the information landscape in the global south, both externally, from foreign influence operations, and internally, from repressive states. That is a threat to the information landscape more broadly, but it is critically also a threat to journalism, and to free journalism in the region. I will get to that when I make a few points about the information environment and the role of advertisement.
But I think it is also important to note that citizens and audiences in the global south have agency, and it is important, when we think about disinformation in these contexts, that we recognize and are very alert to the agency that audiences and citizens have and the ways they are practicing that agency, because that can also hold a lesson for the global north. One of the points that we have made in our research is that we should really encourage more attention to disinformation in the global south, not merely because the global south is important or because more attention should be paid to it, but because there are actually lessons to be learned from the global south experience that can be useful for the global north. One of these is the way that citizens, activists, organizations and civil society movements in the global south are using that agency to fight disinformation through various strategies. One of the interesting strategies that we have seen in the research, which InternetLab has also been involved with in this project that I lead, is how the fight against disinformation in the south is linked with other struggles, so that the fight against disinformation is not seen in isolation but is linked with struggles such as the struggle for Internet access, digital rights, media freedom, education and so on. If we have time, I can elaborate on that in question time, but there are clear examples of how organizations and activists in the global south see the countering of disinformation as part of broader struggles. Activists in the global south know that to empower citizens to stamp out disinformation, these citizens need access to the Internet, for instance; they need digital rights; they need freedom of expression; they have to have a good basis of media literacy, et cetera. So these struggles are often linked.
And when we approach disinformation in the global south, it becomes very clear that we cannot fight disinformation in isolation; we have to see it as part of this broader ecology, this broader array of rights and struggles. So if I can move on quickly to the question of the role of advertisement in the information environment, and what implications these disparities might have for addressing and mitigating disinformation, I would like to return to the focus on journalism. If we think that critical, independent journalism is one of the most important tools we have in the fight against disinformation in the global south, we also have to think about the threat of disinformation as linked to the threats to journalism in the global south. One of the major threats, as I have already alluded to, is ongoing state pressure and repression. This is not a new trend; it has been going on for many, many years. But what is particularly pernicious at the moment is that, maybe ironically, states are using the fight against disinformation to enact fake news laws. We have seen that across the south, but especially in Africa, with which I am more familiar: the fight against disinformation has become a smokescreen for further oppression, and that has become a very pernicious and very important thing to focus on. But when we look at advertising and marketing, again, I think there is a double-edged sword, or maybe two sides of the same coin. If we look at the role of advertising in relation to journalism, taking journalism as a key component, a key guarantee or a key weapon in this fight against disinformation, advertising can be part of the problem. We are familiar with those issues: misleading advertising, advertising that might look like journalism but is in fact marketing.
The very fact that business models can promote a certain type of journalism that is sensationalist, that promotes clickbait, that focuses maybe only on elite audiences and leaves large parts of highly unequal societies without access to media agendas: all of these aspects of advertising and marketing in relation to journalism, I think, are familiar and can create problems in terms of journalism’s ability to fight disinformation. But an aspect that we often lose sight of is that advertising is also important for news organizations in the south, especially when it comes to small independent media outlets in contexts where the state owns and controls many media outlets. These small independent media outlets are often under severe economic threat. We’ve seen during the COVID pandemic how many smaller community organizations, community media and independent media on the continent have had to close down or severely scale back their operations. In this regard, advertising can actually be a way for smaller community outlets to sustain themselves. That’s obviously not the only model; there are donor-based models and philanthropic models and so on that are really important to explore, but advertising is one of those avenues. And what we increasingly hear from these news organizations is that advertisements in the online environment are sucked up by big platforms like Google, with the result that local news outlets lose an important source of revenue, or get a very small part of that revenue, and that threatens their sustainability. Another aspect to point to is that the precarious economic environment in large parts of the global south also means that companies often cut back on advertising.
So whenever there’s an economic downturn or economic pressure, and that is something that characterizes the global south almost universally, advertising dries up. That also becomes a problem for news outlets, and it often opens the door for a capture of these news organizations by those people that have money and influence, and renders them more vulnerable to disinformation. So when we look at advertising in the global south and its relation to disinformation and journalism, we have to understand that it’s a complex issue, that there are different aspects to consider, and that one has to take context into account. Throughout the global south, when we study disinformation, it’s clear that context is incredibly important and that we cannot just import models of understanding and analysis from the global north to understand the problem in the global south. We have to look at this problem within context and within the local specificities. So I’ll leave that there. Those are my initial comments, and I’d be happy to hear any questions or feedback. Thank you.

Heloisa Massaro:
Thank you so much, Herman, for the great overview. I will now pass to Renata Mielli. Renata is a journalist with a bachelor’s degree in social communication from Faculdade Casper Libero. She is currently pursuing her doctorate in the communication science program at the School of Communication and Arts at the University of Sao Paulo, and she holds the distinction of being the first female coordinator of the Brazilian Internet Steering Committee, the CGI, a multi-stakeholder entity responsible, among other duties, for establishing strategic guidelines related to the use and development of the Internet in the country. So, Renata, please.

Renata Mielli:
Thank you. Thank you, Heloisa, and thank you to the InternetLab for the invitation to this session. I think this theme is very important; in Brazil, we have been discussing it for a very, very long time. Well, I have some notes here and a few reflections about this problem. The massive dissemination of false and misleading news and information has currently drawn attention to the harmful effects it has produced in society. The challenge of developing actions that, on the one hand, protect fundamental rights such as freedom of expression, privacy, and access to information, and, on the other hand, preserve respect for cultural diversity, is paramount. Disinformation is a phenomenon as old as the history of the press. Historically, the content value chain has been dependent, to a greater or lesser extent, on the sale of advertisements. Advertising has played a role not only in promoting journalism but also in promoting access to information. Concerns related to the independence of news production and the use of advertising funds to manipulate public opinion are also not new. The Internet, however, has allowed the dissemination of false and misleading news and information to reach unimaginable levels, and its negative effects on society have become even more severe. Understanding this phenomenon necessarily involves understanding the emergence of a network of motivations for the creation, dissemination, and consumption of false and misleading content, which amplifies information disorder and is related to the business models of digital platforms. In this sense, the term disinformation industry is appropriate to describe the continuous increase in the complexity and size of production chains and networks of actors that emerge, stimulated by high financial investments, mostly funded by advertising. Digital platforms have captured an important part of the advertising market, amplifying content through the use of personal and sensitive data.
An important part of this content is misleading, false, harmful, and illegal. Research has suggested that content with demonstrably false information circulates more than verified content, feeding digital platforms’ business models. Content moderation regulation faces issues such as the profound lack of transparency around the development of advertisements and the algorithms that showcase them. Beyond that, intermediary liability regimes, based on the principle of non-liability of the networks, are being questioned, raising issues yet to be settled. As sponsored and boosted content has greater capacity to reach Internet users across different platforms, it is fundamental to investigate the damage it causes to the production of information and news, and the role advertisement plays in these processes, especially in a scenario of massive collection and use of personal data to profile users and target propaganda. Regulatory initiatives need to take into account both the specific aspects of information flow, the advertising market and its actors, and how the business models of large platforms favor disinformation. In order to define strict policies that enable a healthy informational environment, some directives may be considered. First, regulating the role of influencers in programmatic media; this is a very big problem we have, as influencers now have larger audiences than newspapers and journalists. Second, establishing strict measures for transparency in advertisement, also covering sponsored and boosted content on social media, such as advertising libraries served by digital platforms and disclosure of the reach and the profiles involved in the ad or boosted content. Third, corporate responsibility of each of the intermediaries and links in the advertising chain in relation to the integrity of the public debate, as suggested in the booklet formulated by the InternetLab, called Public or Fake or Ad or Fake. Other initiatives, we hope, may be proposed in the discussions carried out in this session.
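As an editor’s illustration of the transparency directive above (an advertising library with disclosure of sponsor, reach and targeting profile), here is a minimal sketch. The `AdLibraryEntry` class and its field names are invented for illustration and do not correspond to any platform’s real schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one entry in a platform ad-transparency library:
# the ad plus disclosure of who paid, the reach achieved, and the
# targeting profile. Field names are illustrative assumptions only.
@dataclass
class AdLibraryEntry:
    ad_id: str
    sponsor: str              # who paid for the ad or boost
    is_boosted: bool          # organic post amplified with paid reach
    impressions: int          # disclosed reach
    targeting: dict = field(default_factory=dict)  # criteria used to target

    def disclosure(self) -> str:
        kind = "boosted content" if self.is_boosted else "advertisement"
        return (f"{self.ad_id}: {kind} paid by {self.sponsor}, "
                f"{self.impressions} impressions, targeted on "
                f"{', '.join(self.targeting) or 'no declared criteria'}")

entry = AdLibraryEntry("ad-001", "Example Corp", True, 120000,
                       {"age": "18-34", "region": "SP"})
print(entry.disclosure())
```

A public library of such records would let researchers and regulators see, per ad, exactly the reach and profile disclosures Renata Mielli calls for.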
Finally, the Brazilian Internet Steering Committee carried out a broad consultation process on platform regulation, which, among other issues, involved questions about concentration in the online advertising market and the risks of the platform business model, such as disinformation and infodemics. The consultation received more than 20,000 contributions from individuals and organizations from different sectors of society. The analysis of its results is still ongoing, and we hope that it can be of great value for the formulation of an innovative and locally designed policy. Those are my first reflections. Thank you for the opportunity.

Heloisa Massaro:
Thank you, Renata. Now we are going to open the mic, not only for questions but also for considerations, comments and thoughts. So we ask those who are here and want to make an intervention to use that mic over there. For those who are online, you can either send it via the Q&A or raise your hand, and we will monitor so we can allow you to intervene.

Audience:
So, anyone? I guess I’ll ask a question if people don’t want to ask questions. I’m Dan Arnato, from the National Democratic Institute. I’m curious that you didn’t talk too much about political advertising, and that’s a lot of what we engage in monitoring at NDI and other election observation groups. So I’m curious about the role particularly of political advertising and how it could better be managed using different kinds of interventions, whether they’re legal mechanisms to control it or monitoring systems. I think Cambridge Analytica, as you mentioned, really demonstrated some of the challenges we have in terms of data that could be used for targeting. It’s a problematic component, because that kind of information is useful for research and for other purposes, but it is unfortunately a problematic part of our modern political systems that these systems can really become weaponized. So I’m curious about your perspective on that piece.

Heloisa Massaro:
Thank you, Daniel. We have one more.

Audience:
My name is Juliano. There is a difficulty in separating the activity of digital influencers between their own work and their role as political marketers. Part of advertisement funds is dedicated to political campaigns. So I’d like to hear from the panelists a little about how we could develop a kind of regulation that could look into how advertising is fomenting disinformation in political campaigns, as it is so difficult to separate political content from all the other kinds of content circulating on the Internet. Thank you.

Hi, my name is Juliana. I’m a technical advisor to the Brazilian Internet Steering Committee, and in Brazil we have a self-regulating council for government advertisement. So I would like to know how the measures mentioned to mitigate those risks can articulate with the self-regulatory framework and with regulation from the state. For instance, could we demand more transparency from these influencers through the self-regulatory council, or do these councils have too many problems, like regulatory capture and a lot of other difficulties? Can we adapt these spaces and advance, or should we rather trust state regulation? I would like to know how these things interact. Thank you.

Heloisa Massaro:
Thank you, Juliana. Anyone else, or do we get back to the panel? Okay, so we are getting back to the panel, and before passing the word to my colleagues I would actually like to add something to Juliana’s question, which was really great. During the project we were developing, we actually mapped some of these self-regulatory bodies for advertisement and how they interact with these issues. It’s interesting that normally there are a couple of safeguards in place, self-regulatory norms that target what would be disinformation in the narrative at the level of misleading consumers. But when you go beyond that, when the problem with the narrative is less about the product and more about how it can uphold other types of disinformation, or even hate speech, depending on how you build the narrative, and when you’re speaking about how advertisement may finance or be a source of funding for disinformation outlets, then there is a limit to what the self-regulatory bodies have offered until today. I think this is one of the challenges: how do we think about the way forward? Is this something we should cover with state regulatory approaches? In Brazil, and Renata can speak more about that than me, we are discussing this in the fake news bill, the platform regulation bill, which has something on advertisement. But there is a long way to go, and there is this space where there are not so many parameters and safeguards. I will stop here to let my colleagues speak, and I will actually go backwards now: I’ll pass first to Renata, then to Herman, Anna and Eliana.

Renata Mielli:
Well, three very good questions. I cannot answer all of them, but just a reflection, because this challenge of how we can conceptualize political advertisement is very difficult: it is a very thin line between freedom of expression, the free flow of ideas, of political ideas. So conceptualizing this is very difficult, and it is a challenge to establish good practices to avoid disinformation here. We are all dealing with that. In Brazil, we have passed through two elections where the flow of disinformation content in the political debate was enormous, but I think it is very difficult to categorize political advertisement. What are we talking about? When a political party produces some content, is this advertisement or not? Just a question for our reflection: how do we manage this? So this is a very big problem, and it is not easy to face it. Another comment is about what Juliana brought to us. I dealt with that before working with the Internet, when I was in civil society discussing the democratization of communication in Brazil, and the private sector in advertising always said that there is a new right that we should add to human rights: the free speech of advertisement, as they put it, I don’t know how to translate it. They use this expression to avoid any kind of regulation. I myself don’t believe in self-regulation in this sector; I think we need another kind of approach. Of course, this kind of structure has its importance, but we have to have a public space to discuss advertisement in a more serious way. And we didn’t talk about this, but political advertisement is a problem, and we also have problems with health, when there is advertisement about medicines. We saw this in the pandemic, and this is a very big problem because it affected people’s lives. So those are only a few comments.

Heloisa Massaro:
Thank you so much, Renata. And now back to Herman, who is online.

Herman Wasserman:
I won’t say much more than the previous speakers have said, because a lot of that resonates in the South African context. We do have regulations for political advertising, but they come from a previous era, really pre-social media. So the advertising of political parties prior to elections on, say, broadcast channels and newspapers is fairly well regulated, but what happens on social media is, I think, less easily regulated. Also, just to confirm what the previous speakers said about what we define as advertisements: increasingly, political parties are using all sorts of other approaches, guerrilla marketing and kinds of campaigns that are not as easily definable. In this regard, I would say that regulation is important, but maybe even more important is a coalition of journalists and civil society organizations to interrogate what political parties are saying, to fact-check their claims, to make audiences aware of the source of claims and campaigns and marketing strategies. So I think formal regulation in itself is not enough; it is important that this also forms part of broader awareness-raising and a broader systemic orientation towards political communication from journalists, civil society organizations, et cetera.

Heloisa Massaro:
Thank you, Herman. And now back to Anna.

Anna Kompanek:
So I won’t necessarily comment on political advertising, since that’s not specifically the issue we’re looking at, but I just wanted to re-emphasize the point that Herman made in his earlier remarks that independent journalism is the key weapon in combating disinformation. Speaking from the perspective of the private sector, as I said, there are many ways that the local private sector, local businesses, can help invest in a healthy infospace and in independent journalism beyond ethical advertising: through impact investment, blended finance, corporate philanthropy, or thinking about it as part of their CSR. With that corporate mindset of thinking about their impact on the information space, we do see the local private sector also involved in simply having a voice as the policies that govern the information space are being made. In Armenia, for instance, we work with a local business organization that has provided input into the national strategy against disinformation. So, as a broader principle, whatever laws are being passed, the biggest danger is the government just passing the law without any kind of consultation or input from civil society and also from the local private sector. There is also value, which we see in our work, in bringing local business organizations together as part of the broader coalitions that were mentioned before, to talk about the issue of securing the information space, mapping it out, and thinking about incentives for private-sector investment in independent media. We see this, for instance, in the Philippines, where we helped bring together the Philippine Association of National Advertising and the Makati Business Club, one of the major business organizations in the country, to talk about this particular issue, which may not necessarily be a natural topic for entities like that. So let’s be creative about which stakeholders are involved and what collaborations are possible.

Heloisa Massaro:
That’s really interesting, thank you, Anna. And now back to Eliana.

Eliana Quiroz:
Thanks. Building on Anna’s response: yes, I guess it is key to understand which actors are playing. So, taking the question about political advertising, it would be about following the money and also understanding which entities are part of this ecosystem, to bring some transparency on which actors are participating. When I think about bringing transparency, for example, it is to understand, in a specific election, which marketing companies are taking part and which are contracted by which political party. And not only marketing companies but also, for example, influencers; many influencers are contracted by political campaigns, so it is good to know who is bringing paid information, not only advertising placed directly on the platforms. It also means understanding which data providers are there, which data-analysis companies are playing some kind of role, which audiovisual media production companies, communication, digital and public relations companies, and also fact-checking and public opinion companies are providing services during an election. And I am thinking about during an election because it is delimited; it is a special moment, not just any time in political life, but a very specific one, and it is possible for the electoral authority to bring some regulations. The second idea could be that some companies are not really aware of certain human rights frameworks, like the business and human rights framework, and it is very good to include them in the conversation and bring some knowledge about these frameworks, to let them know what is allowed and what is not. And then, of course, I really do think regulation is part of the solution, but we know that it is very complicated, because we also have to take care of freedom of expression. Yet regulation, not only of the platforms but also of some actions of other companies, is part of the solution. And I will stop there.
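As an editor’s illustration of the “follow the money” registry Eliana Quiroz describes, here is a minimal sketch of an election-period transparency registry recording which companies are contracted by which campaign. All party names, vendors and categories below are invented examples.

```python
from collections import defaultdict

# Hypothetical election-transparency registry: party -> declared contracts.
registry = defaultdict(list)  # party -> list of (vendor, category)

def declare(party, vendor, category):
    registry[party].append((vendor, category))

def vendors_by_category(category):
    """Follow the money: every declared vendor of one type, per party."""
    return {party: [v for v, c in contracts if c == category]
            for party, contracts in registry.items()}

declare("Party A", "AdWorks Ltda", "marketing")
declare("Party A", "@big_influencer", "influencer")
declare("Party B", "DataSight", "data-analysis")
declare("Party B", "@big_influencer", "influencer")

# The same influencer contracted by two campaigns is exactly the kind of
# overlap a public registry would surface.
print(vendors_by_category("influencer"))
```

The same structure extends to the other actor types she lists (data providers, pollsters, fact-checking companies) by adding categories.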

Heloisa Massaro:
Thank you, Eliana. We have reached our time limit, so I would like to thank all our panelists today. I think it was a really interesting discussion, and there are some takeaways, or at least some points, we can map from it: not only the difficulty of defining political advertisement, but also the blurred line between what is commercial advertisement and what is political advertisement. We have seen this in Brazil in the last election, where brands are engaging politically, raising the question of where the line of free speech can or cannot be drawn. And also the importance not only of advancing regulation but of engaging different actors, because although there may be actors with bad intentions within the ecosystem, we actually have a large number of actors that are there to be engaged and included in the conversation on business and human rights. So I would like to thank everyone for being here today, and thank you to everyone who stayed until almost seven o’clock with us. I hope you have a good rest of the IGF and a good rest of Wednesday.

Anna Kompanek: speech speed 145 words per minute; speech length 1143 words; speech time 473 secs

Audience: speech speed 114 words per minute; speech length 398 words; speech time 209 secs

Eliana Quiroz: speech speed 124 words per minute; speech length 1066 words; speech time 514 secs

Heloisa Massaro: speech speed 130 words per minute; speech length 2104 words; speech time 975 secs

Herman Wasserman: speech speed 166 words per minute; speech length 2249 words; speech time 812 secs

Renata Mielli: speech speed 109 words per minute; speech length 1152 words; speech time 635 secs
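The statistics above are internally consistent: speech speed is speech length divided by speech time in minutes. A quick check (the rounding convention is an assumption; most entries round-trip exactly, and Heloisa Massaro’s figure is within one word per minute):

```python
# Speech speed should equal speech length divided by speech time in
# minutes. Figures are taken from the session statistics above.
def words_per_minute(words, seconds):
    return round(words * 60 / seconds)

print(words_per_minute(1143, 473))  # Anna Kompanek: 145
print(words_per_minute(2249, 812))  # Herman Wasserman: 166
print(words_per_minute(398, 209))   # Audience: 114
```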

Exploring Blockchain’s Potential for Responsible Digital ID | IGF 2023


Full session report

Judith

Vicky expresses gratitude and greets the audience, creating a positive and welcoming atmosphere. The speaker’s tone and appreciation set the stage for an engaging interaction.

Joey

The project had several positive outcomes for Ugandan students. Firstly, it provided exposure to technology and hands-on experience. Students had the opportunity to interact with students from Japan, which not only helped them develop their cross-cultural skills but also sparked an interest in technology. This exposure to different cultures and technology is important for their educational development and future career prospects.

Furthermore, the project had a significant impact on language and social learning. Students were able to engage in interactive language practices and received artistic feedback on their language skills. They also had the chance to express themselves in both Swahili and English. This not only improved their language proficiency but also facilitated their social and emotional learning.

However, the project faced challenges in integrating technology due to limited resources and budget constraints. The local setup, Gudu Samaritan, struggled to invest in technology because of these constraints. This highlights the need for adequate funding and resources to ensure the successful integration of technology in education.

Another obstacle was the unstable internet connection, which hindered online participation. This limited students’ ability to fully engage in online activities and access educational resources. Stable and reliable internet connection is crucial for effective technology integration in schools.

Regarding curriculum integration, there is a need to engage with the Ministry of Education. Engaging with the Ministry would ensure better resource allocation and adjustment of teaching methods to effectively integrate the project into the curriculum. This collaboration is necessary for the long-term sustainability and impact of the project.

Funding was deemed crucial for projects that integrate technology into schools. The government should provide infrastructure, such as a stable internet connection, for successful implementation. Additionally, schools like Gudu Samaritan require resources like an intelligence system, robots, and computer equipment to fully leverage the benefits of technology in education.

Another important aspect is promoting literacy in online platforms. All students and teachers should be literate in online platforms. This would ensure equal access to information and opportunities. Educators should be given the opportunity to participate in online workshops and training to gain confidence in incorporating technology into their everyday teaching.

In conclusion, the project had various positive impacts on Ugandan students, including exposure to technology, cross-cultural interaction, and development of language and social skills. However, challenges such as limited resources, budget constraints, unstable internet connection, and the need for curriculum integration must be addressed for the successful integration of technology in education. Adequate funding, collaboration with the Ministry of Education, and promoting literacy in online platforms are essential for the continuation and growth of such projects.

Ruyuma Yasutake

The HARU project has had a positive impact on English conversation classes, enhancing the overall learning experience. HARU, an advanced AI-based interactive robot, helps to create smoother and more engaging conversations by responding to moments of silence and using interesting facial expressions. This not only makes the conversations more enjoyable but also creates a dynamic learning environment. The use of HARU has also facilitated cross-cultural interaction by connecting students from different countries. This provides a unique opportunity for meaningful conversations and a better understanding of different cultures. While there have been some challenges, such as system troubles and interruptions in interactions, the overall experience has been positive. HARU also offers the opportunity for students to interact and work with professional international researchers, which enhances their learning. Furthermore, HARU has the potential to connect students from different countries, promoting global collaboration in education. Additionally, HARU can be used as a partner for practicing conversations, allowing students to improve their conversation skills in a supportive environment. The use of AI’s evaluation system in education also holds promise for fairer assessments, reducing biases and promoting fairness. In conclusion, HARU has numerous benefits and, with further advancements and improvements, has the potential to revolutionize education and communication.
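One behavior the summary attributes to HARU is responding to moments of silence to keep conversations flowing. A hypothetical sketch of such a trigger is below; the threshold, prompt text and `SilenceMonitor` class are invented for illustration and are not Honda’s actual implementation.

```python
import time

# Hypothetical silence-triggered prompting: when a conversation goes
# quiet for too long, the robot interjects to keep the exchange flowing.
SILENCE_THRESHOLD = 4.0  # seconds of quiet before the robot steps in

class SilenceMonitor:
    def __init__(self, now=time.monotonic):
        self.now = now                       # injectable clock for testing
        self.last_utterance = self.now()

    def heard_speech(self):
        # called whenever speech is detected
        self.last_utterance = self.now()

    def maybe_prompt(self):
        # returns a prompt if the silence threshold has been exceeded
        if self.now() - self.last_utterance >= SILENCE_THRESHOLD:
            self.last_utterance = self.now() # avoid immediate re-prompting
            return "What do you think about that?"
        return None
```

In a real system the clock would be wall time and the prompt would be paired with the facial expressions the summary mentions; the injectable `now` parameter simply makes the logic testable.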

Randy Gomez

The Honda Research Institute, led by Randy Gomez and his team, responded positively to UNICEF’s call to implement and test policy guidance. They dedicated a significant portion of their resources to developing technology for children, with a focus on creating a system that enables cross-cultural interactions among groups of children from different countries. This system involves a robot facilitator that connects to the cloud, allowing children to interact regardless of their geographical locations.
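The architecture described above, a robot facilitator at each site connected through the cloud, can be sketched as a simple relay. The `CloudSession` class and message format below are illustrative assumptions, not Honda’s actual system.

```python
# Hypothetical cloud-side session that robot facilitators in different
# countries join, so groups of children can interact regardless of
# location. Each facilitator forwards utterances to every other site.
class CloudSession:
    def __init__(self):
        self.sites = {}  # country -> list of messages delivered there

    def join(self, country):
        self.sites.setdefault(country, [])

    def relay(self, from_country, text):
        # forward a child's utterance to every other connected site
        for country, inbox in self.sites.items():
            if country != from_country:
                inbox.append(f"[{from_country}] {text}")

session = CloudSession()
session.join("Japan")
session.join("Uganda")
session.relay("Japan", "Hello! What game shall we play?")
print(session.sites["Uganda"])
```

The interactive games the team used in their experiments would run as turns on top of such a relay, with the local robot acting as each group’s facilitator.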

The team conducted experiments using interactive games facilitated by the robot to evaluate the effectiveness of their technology in promoting cross-cultural communication. The results were overwhelmingly positive, demonstrating the efficacy of the technology in enabling these interactions.

In addition to developing the technology, the team recognized the importance of understanding its societal, cultural, and economic impact on children from diverse backgrounds. They deployed the robots in hospitals, schools, and homes to gather insights into implementing the technology in different settings. They collaborated with Vicky from JRC and applied their application alongside IEEE standards to ensure industry compliance.

Overall, the Honda Research Institute’s work contributes to the United Nations’ Sustainable Development Goals, specifically in reducing inequalities, ensuring quality education, and promoting industry, innovation, and infrastructure. The technology they developed for cross-cultural interactions among children fosters understanding and connectivity. It has the potential to create a more inclusive and globally connected society, while also shedding light on the societal, cultural, and economic effects of robotic technology on children’s development.

Steven Boslow

Artificial Intelligence (AI) technology is increasingly present in the lives of children, being used in areas such as gaming, education, and social apps. These AI systems have the power to influence significant decisions, including those related to health benefits, loan approvals, and welfare subsidies. However, it is concerning that most national AI strategies in 2019 did not adequately consider children as stakeholders. This lack of recognition of children’s rights in AI policies highlights the need for improvements.

Moreover, the existing ethical guidelines for AI do not sufficiently address the unique needs of children. These guidelines are not specifically tailored to tackle the challenges and risks that children may face with AI technologies. This oversight is worrisome, considering the substantial impact that AI can have on children’s lives.

On a positive note, UNICEF, in collaboration with the Finnish Government, took an initiative in 2019 to address this issue by introducing policy guidance on AI and children’s rights. This guidance aims to provide a framework for responsible and ethical use of AI concerning children. Several organizations have since implemented these guidelines and shared their experiences and lessons learned. The implementation of UNICEF’s guidelines is a crucial process in safeguarding the rights and well-being of children in the context of AI.

Recognizing the fact that children make up approximately one-third of all online users and an even higher proportion in developing countries, it becomes evident why prioritizing children’s rights is essential. While AI presents great opportunities, it also poses significant risks for children. Therefore, it is important to establish robust regulations that effectively protect their rights while enabling the positive utilization of AI technology.

In conclusion, the increasing presence of AI in children’s lives emphasizes the need for them to be recognized as key stakeholders in national AI strategies and ethical guidelines. UNICEF’s efforts to develop and implement guidelines specifically addressing AI and children’s rights are commendable. They highlight the importance of prioritizing children’s needs and ensuring their protection in the development of AI regulations. To ensure a safe and beneficial AI environment for children, continuous improvement of policies, guidelines, and regulations that cater to their unique requirements is essential.

Moderator

According to the analysis, children were not adequately recognized in national AI strategies or ethical guidelines for responsible AI. This lack of recognition raises concerns about the potential negative implications AI could have on children.

One of the key findings is that AI is increasingly being used in education and gaming, indicating it has become an integral part of children’s lives. Given the significant number of children who are active online users, particularly in developing countries, the impact of AI on their lives cannot be ignored.

Furthermore, the analysis highlights that adopting responsible AI or technology can be challenging. Applying principles for responsible AI can cause tensions to arise, and the context in which these principles are applied is crucial. Developing effective regulations and policies concerning AI requires careful consideration of the specific needs and vulnerabilities of children.

The analysis also emphasizes the importance of prioritizing the role of AI in children’s lives when it comes to regulation and policy-making. It highlights the potential risks AI poses, such as providing poor mental health advice or infringing on children’s privacy. These risks underline the urgent need to establish robust guidelines and safeguards to protect children’s well-being and rights in the context of AI.

Additionally, the Honda Research Institute’s development of robotic technologies for children, in response to UNICEF’s call to implement and test the policy guidance, is noteworthy. This initiative demonstrates a commitment to addressing the specific needs and challenges faced by children in an increasingly AI-driven world.

Collaboration between urban students from Tokyo and rural students from Uganda was a significant aspect of the analysis. This collaboration aimed to enhance intercultural understanding and explore the variations in children’s rights comprehension across different situations. This emphasizes the importance of context in comprehending and addressing children’s rights issues.

Moreover, the role of technology in education was found to have a positive impact on students’ understanding and interest. The projects analyzed contributed to the development of social and emotional skills, further reinforcing the potential benefits of integrating technology in educational settings.

However, the analysis also identified several challenges. Limited resources and budget constraints were major obstacles, particularly at Gudu Samaritan, a school in a rural part of Uganda. These constraints made it difficult to invest in technology and maintain a stable internet connection, hindering the implementation of projects.

To overcome these challenges, the analysis suggests engaging the Minister of Education in Uganda to integrate the project into the curriculum and secure additional resources. This approach would not only address budget constraints but also provide the necessary time and support to adapt teaching methods effectively.

In conclusion, the analysis highlights the need for greater recognition of children in AI strategies and ethical guidelines. It underscores the importance of considering the specific needs and vulnerabilities of children when developing regulations and policies related to AI. The potential risks associated with AI, such as issues related to mental health and privacy, call for the implementation of comprehensive safeguards. The analysis also sheds light on the positive impact of technology in education, particularly in enhancing students’ understanding, interest, and social and emotional skills. However, challenges such as limited resources and budget constraints must be addressed through collaborative efforts involving government bodies and educational institutions. Overall, a comprehensive and child-centric approach to AI and technology adoption is essential to ensure the well-being and rights of children in the digital age.

Session transcript

Moderator:
So, welcome to our session on the implementation of the UNICEF policy guidance for AI and children’s rights. This is a session where we are going to show how we, our extended team, tried to implement some of the guidelines that UNICEF published a couple of years ago. I would like to welcome, first of all, our online moderator, Daniela DiPaola, who is a PhD candidate at the MIT Media Lab. Hi, Daniela. She is going to help with the online and remote speakers. And here I would also like to invite Steven Boslow and Randy Gomez, our co-organizers, to come on the stage so we can set the scene to start the meeting. Thank you. So first, let me introduce Steven Boslow. Steven is a digital policy, innovation and ed tech specialist with a focus on emerging technology. Currently, he is a digital foresight and policy specialist for UNICEF, based in Florence, Italy. Steven was the person behind the policy guidance on AI and children’s rights at UNICEF. Steven, you can probably explain more about this initiative. Thank you.

Steven Boslow :
Thanks, Vicky. And good afternoon, everyone, and good morning to those online. It’s a pleasure to be here. So I’m a digital policy specialist, as Vicky said, with UNICEF. I’ve spent my time at UNICEF looking mostly at the intersection of emerging technologies, how children use them and are impacted by them, and policy. So we’ve done a lot of work around AI and children. Our main project started in 2019 in partnership with the government of Finland, and funded by them; they’ve been a great partner over the years. At the time, in 2019, AI was a very hot topic, as it is now, and we wanted to understand whether children were being recognized in national AI strategies and in ethical guidelines for responsible AI. So we did some analysis and found that in most national AI strategies at the time, children really weren’t mentioned much as a stakeholder group. And when they were mentioned, they were either seen as needing protection (which they do, but there are other needs) or as the future workforce that needs to be trained up. So not really thinking about all the unique needs of every child, their characteristics, their developmental journey, and their rights. We also looked at ethical AI guidelines. In 2019, there were more than 160 guidelines. Again, we didn’t look at all of them, but generally found insufficient attention being paid to children. So why do we need to look at children? Well, first, at UNICEF our guiding roadmap is the Convention on the Rights of the Child. Children have rights: all the human rights plus additional rights, as you know. Secondly, one third of all online users are children, and in most developing countries that number is actually higher. And thirdly, AI is already very much in the lives of children, and we see this in their social apps, in their gaming, and increasingly in their education.
They are impacted directly as they interface with AI, or indirectly as algorithmic systems determine health benefits for their parents, loan approvals, or welfare subsidies. And now with generative AI, which is the hot topic of the day, AI that used to be in the background has come into the foreground, so children are interacting with it directly. So, very briefly: after this initial analysis, we saw the need to develop some sort of guidance for governments and companies on how to think about the child user as they develop AI policies and AI systems. We followed a consultative process. We spoke to experts around the world, some of whom are here, and we engaged children, which was a really rich and necessary step, and came up with a draft policy guidance. And we recognized that it’s fairly easy to arrive at principles for responsible AI or responsible technology; it’s much harder to apply them. They come into tension with each other, and the context in which they’re applied matters. So we released a draft and invited people to use the document, tell us what works and what doesn’t, and give us feedback, which we would include in the next version. We had people in the public space apply it, like YOTI, the age assurance company. And we also worked closely with eight organizations, two of which are here today, the Honda Research Institute and the JRC, as well as IMISI3D; Judith is on her way. We basically said: apply the guidance, and let’s work together on your lessons learned, what works and what doesn’t. So that’s what we’ll hear about today. It was a real pleasure to work with the JRC and the Honda Research Institute and to learn the lessons. And so, in closing: AI is still very much a hot topic. It’s an incredibly important issue, and technology in general, to get right. It is increasingly in the lives of children, like I said, with generative AI.
There are incredible opportunities, for example for personalized learning and for engagement with chatbots or virtual assistants. But there are also risks. That virtual assistant that helps you with your homework could also give you poor mental health advice. Or you could tell it something you’re not meant to, and there’s an infringement on your privacy and your data. So as different governments and regional blocs now try to regulate AI, and the UN tries to coordinate, we need to prioritize children. We need to get this right. There’s a window of opportunity, and we really need to learn from what’s happening on the ground and in the field. So yeah, it’s a real pleasure to have these experiences shared here as bottom-up inputs into this important process. Thank you.

Moderator:
Thank you so much, Stephen. Indeed, at that point we already had some communication with UNICEF through the JRC of the European Commission, and we already had an established collaboration with the Honda Research Institute in Japan, evaluating the system from a technical point of view: trying to understand, for example, the impact of robots on children’s cognitive processes or social interactions, et cetera. There is an established field of child-robot interaction within the wider community of human-robot interaction. That was when we discussed with Randy applying to UNICEF with this case study. And I think Randy can now give us some context, from a technical point of view, on what this meant for the Honda Research Institute and his team. Randy?

Randy Gomez:
Yeah. So, as Stephen mentioned, there was this policy guidance, and we were invited by UNICEF to do some pilot studies to implement and test it. That’s why we, at the Honda Research Institute, developed technologies in order to do the pilot studies. Our company is very interested in embodied mediation, where we have robotic technologies and AI embedded in society. And as I mentioned earlier, in response to UNICEF’s call to implement and test the policy guidance, we allocated a significant proportion of our research resources to developing technologies for children. In particular, we are developing an embodied mediator for cross-cultural understanding: a robotic system that facilitates cross-cultural interaction. We developed a technology where the system connects to the cloud and a robot facilitates the interaction between two groups of children from different countries. Before the actual implementation and study, we used the UNICEF policy guidance to look into how we could implement this, including the interaction design between children and the robot. So we deployed robots in hospitals, schools, and homes. We also looked into the impact of robotic applications from social, cultural, and economic perspectives, with children from different countries and backgrounds, and into the impact of robotic technology on children’s development. We tried experiments with a robot facilitating interaction between children in a game-like application. Finally, we also looked into how to place our system and our pilot studies in the context of standards; that’s why, together with the JRC, with Vicky, we looked into applying our application with the IEEE standards.
And through this, we had a lot of partners and built a lot of collaborations, who are actually here, and we are very happy to work with them. Thank you.

Moderator:
Thank you so much, both of you. This was to set the scene for the rest of today’s session. As Randy and Stephen mentioned, this was quite a journey for all of us, and around this project there are a lot of people: a great team here, but also 500 children from 10 different countries, chosen on purpose to have a large cultural variability. We have some initial results, and for the next part of the session we have invited some people who actually participated in these studies. So thank you very much, both of you. I would like to invite first Ruyuma. Ruyuma is one of the students that … Thank you. Ruyuma, you can come over. Ruyuma is a student at a high school here in Tokyo; you can take a seat here if you want. Yeah, that’s fine. He’s here with his teacher and our collaborator Tomoko Imai. And online we also have Joey, a teacher at the school in Uganda where we tried to implement participatory action research, which means that we brought the teachers into the research team. For us, educators are not only part of the end-user studies but also part of the research, so we interact with them all the time in order to set research questions that come directly from the field. So we are going to start. Do you want to sit here, or would you rather stand? Whatever you want. Sure. So we have three questions for you. First, we would like you to tell us about your experience in this process, participating in our studies.

Ruyuma Yasutake:
We have online English conversation classes once per week at school, but we often have problems continuing the conversation. Through our participation in the HARU project, we had a chance to talk with children from Australia with the help of HARU, and this made things somehow different. For example, sometimes there was a moment of silence, but HARU could feel these moments and made the conversation smoother. Also, during the conversation, HARU would make interesting facial expressions and make the conversation fun for us.

Moderator:
During the project, you also had a chance to design robot behaviors and to interact with engineers, which was really nice. Yeah. And during the project you probably faced some challenges; I mean, there were some moments where you thought, oh, this project is very difficult to get done. Do you have anything to tell us about this?

Ruyuma Yasutake:
The platform is still not stable, and sometimes there was system trouble. For example, once the robot overheated and could not cool down, so HARU stopped the interaction and started again. But overall, the experience was positive because I had a great time talking with professional researchers who were trying to fix the problem. Being able to work with international researchers was a very valuable experience for me.

Moderator:
Thank you, Ruyuma. And do you want to tell us how you would imagine the future of education? I mean, through your eyes, as you are now in education: if, in the near future, you had the possibility to interact more with robots or artificial intelligence within formal education, what would this look like for you?

Ruyuma Yasutake:
I hope that HARU can help connect many students in different countries, and that the robot can be a partner to practice conversation with by taking different roles, like teacher, friend, and so on. And probably the use of AI in evaluation systems can make them more fair.

Randy Gomez:
OK. So thank you very much, Ruyuma. This was an intervention from one of our students; next time we can probably have more of them. Thank you. You can go and take a seat there; I’ll take a seat here. The questions will come later. And now we have an online speaker, Joy. Can you hear us, Joy?

Joey:
Yes, I can hear you.

Moderator:
Perfect. So Joy is one of our main collaborators. She is an educator in a rural area of Uganda, in Boduda. Her school is quite remote, I would say. Through another collaborator of ours, we had an initial interaction with her, explained our project, and asked if we could have some sessions. Our main goal in including a school from such a different economic and cultural background was to see whether, when we talk about children’s rights, this means exactly the same in all situations. Does the economic or cultural context play any role here? So what we did was bring together the students from Tokyo, an urban area, and the students from Uganda to explore the concept of fairness. We ran studies on storytelling, and we asked children to talk about fairness in different everyday, technology, and robotic scenarios. And now, Joy, would you like to talk a little bit about your experience participating in our studies?

Joey:
Yeah, I’m excited. And thank you very much for inviting me to the conference. Thank you very much. I’m Joy, and I’m an educator from a Ugandan school called Bunamari Budusa Maritan, which is located in Uganda, in a rural setting. It has a total of about 200 students in the age bracket of 5 to 18 years old. Most of the students live close to the school, and their parents are generally ordinary citizens. The greatest benefit from being involved in the project has been the exposure for my students. The project has enabled our students to participate and have hands-on experience that enhances their understanding of and interest in technology and other cultures. It was the first time for them to talk to children in Japan and other countries, and that really was a great experience for them. Additionally, a great bonus was language learning: the students were able to engage in interactive practice and received authentic feedback on their language skills. You could find that they learned how to express themselves in Swahili and English. The sessions were well planned and really captured our students’ attention, which increased their engagement during the activities we were handling. In my opinion, the project really enabled social and emotional learning: the development of social skills, consideration of emotional intelligence, and feeling compassion for their peers in Japan. They really enjoyed it, and they learned about Japanese culture and the school as a whole.

Moderator:
Thank you so much, Joy. And now, would you tell us a little bit about the challenges you faced while participating in our studies? Of course, we didn’t have the opportunity to have a robot at the school there; we are at very initial phases, where we do ethnography, so that will probably come in the future. But we already had some other interactions and discussions with Joy. So would you like to tell us a little about the challenges you faced, even with the simple technology that we used during our project?

Joey:
Thank you, Vicky. In my opinion, the major obstacle was the limited resources we had at the local level, both in Uganda generally and at the school itself. Gudu Samaritan is a local setup with budget constraints, making it difficult to invest in technology. We also found that the internet connection was not stable, and that made it very hard to keep up with the timing of the online sessions. Another issue had to do with curriculum integration: we feel there is a need to engage the Minister of Education back in Uganda to integrate the project, so that there are additional resources, time, and adjustments to teaching methods.

Moderator:
Thank you, Joy. And what is your vision for the future? What would you like to have for the future in the context of this project?

Joey:
Thank you. The most important aspect for us is the funding of such projects. First, the government should provide the infrastructure for a stable internet connection for all; this is a basic need for the integration of technology in schools. You find that in a school like Gudu Samaritan there is no power and no internet connection. We were only using one phone and maybe one laptop, which was very hard. So if there is funding, it will help ease the children’s connection to the internet. We also need resources and the necessary materials, like intelligent systems, robots, and computer equipment in the schools. You find that in Japan, the children and their fellow students had computers. This way, our students will have equal access to information, like they do in Japan. For the future, we envision our schools having not only the necessary technology, such as computers and robots for the students, but also trained teachers. We feel AI literacy is important for all students and teachers. We hope that all educators have the opportunity to participate in online workshops and training, so they feel confident about technology in their everyday teaching. Vicky, as you understand, our participation in this project was a great opportunity for our students, and we hope that we will not stop at how we started, but will continue with this exciting project and grow and excel. Thank you very much.

Moderator:
Thank you, Joy. It has been a great pleasure to work with Joy and the school, and thank you very much for your intervention today. Great. So now, I don’t know if Judith is around. Judith, you’re here. Great. So I would like to invite Judith. As Stephen said beforehand, our project is one of the eight case studies that tried to implement some of the guidelines from UNICEF. Today we also want to get a taste of another case study. So Judith, I need to read your short bio because it’s super rich. Welcome to this session, first of all. Judith is a technology evangelist and business psychologist with experience working in Africa, Asia, and Europe. In 2016, she set up IMISI3D, a creation lab in Lagos focused on building the African ecosystem for extended reality technologies. She’s a fellow of the World Economic Forum, and she’s affiliated with the Harvard Graduate School of Education. So the floor is yours, Judith.

Judith :
Thank you very much, Vicky. Good afternoon, everybody. What a pleasure.

Joey — Speech speed: 174 words per minute; Speech length: 791 words; Speech time: 273 secs

Judith — Speech speed: 143 words per minute; Speech length: 14 words; Speech time: 6 secs

Moderator — Speech speed: 146 words per minute; Speech length: 1361 words; Speech time: 559 secs

Randy Gomez — Speech speed: 130 words per minute; Speech length: 467 words; Speech time: 215 secs

Ruyuma Yasutake — Speech speed: 129 words per minute; Speech length: 223 words; Speech time: 104 secs

Steven Boslow — Speech speed: 162 words per minute; Speech length: 917 words; Speech time: 340 secs

Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44

Full session report

Chris Jones

Geopolitical discussions should focus on areas of agreement rather than disagreement to foster cooperation and prevent conflicts. This approach aligns with SDG 16: Peace, Justice and Strong Institutions. Breaking down large tasks into smaller, manageable ones, advocated by engineer Chris Jones, promotes effective problem-solving and resource allocation, in line with SDG 9: Industry, Innovation and Infrastructure. A positive stance towards international cooperation and addressing challenges through understanding and managing smaller components is supported, aligning with SDG 17: Partnerships for the Goals.

Large organizations may need to make changes to become more agile and adapt to emerging technologies, a principle aligned with SDG 9. Governance discussions should consider both shared values and technical requirements, as highlighted by SDG 16. The process of governance is equally important as the final product, as demonstrated by the UK’s online harms legislation.

Multi-stakeholder governance, involving diverse expertise and perspectives, is crucial, echoing SDG 17. The airline industry’s success in implementing common standards serves as an example of a bottom-up approach aligned with SDG 9. These approaches, emphasizing collaboration, agility, inclusive governance, and bottom-up solutions, contribute to sustainable development, peace, and justice.

Sheetal Kumar

The analysis examines the perspectives surrounding future technologies and their impact on marginalized groups, as well as the governance and development of these technologies.

One argument put forward is that future technology developments may not necessarily bring positive impacts, particularly for marginalized groups. New technologies like quantum-related developments, metaverse platforms, nanotech, and human-machine interfaces can be complex and intimidating, making it difficult for already marginalized individuals to access and benefit from them. This highlights the potential for further exacerbation of inequalities if technology is not developed and implemented in an inclusive manner.

On the other hand, there is a strong emphasis on the importance of inclusive technology development and governance. The argument asserts that the development and governance of technology should be more inclusive, particularly in relation to marginalized groups. This approach recognizes the need for diverse perspectives and experiences to be considered to avoid further marginalisation and ensure equitable access to technological advancements.

Furthermore, the analysis suggests that governments and industry stakeholders should prioritise engaging in multistakeholder discussions related to technology developments. Examples such as the IGF Best Practice Forum on Cybersecurity and the policy network on internet fragmentation are cited as instances of successful multistakeholder dialogue. This underscores the significance of collaboration and cooperation among various stakeholders to ensure that technological advancements are beneficial and meet the needs of all.

In terms of future-proofing, an important observation is that high-tech solutions are not the only way to achieve this. While future technologies are often associated with cutting-edge advancements, it is important to recognise that future-proofing can also involve other approaches that do not solely rely on high-tech solutions.

Another noteworthy perspective is the advocacy for connecting multilateral spaces through people and not solely through novel technology. The analysis highlights the need to improve and enhance existing spaces where work is being done, making them more diverse, inclusive, and connected. By prioritising diversity and inclusivity in these spaces, stakeholders can foster collaboration, coordination, and cooperation, ultimately leading to more effective and equitable outcomes.

The analysis also praises the United Nations’ Internet Governance Forum (IGF) as an open, inclusive deliberative space that plays a crucial role in discussing and shaping technology governance. It emphasises the significance of preserving and enhancing spaces like the IGF, which offer unique opportunities for stakeholders to come together, exchange ideas, and collaboratively address the challenges associated with technology governance.

Additionally, transparency, engagement, and the preservation of user autonomy are considered fundamental principles that should be upheld in technology governance. The analysis argues that good governance principles, which are already known, should be applied to new technologies. This includes timely and clear information sharing that is accessible to a wide range of individuals, ensuring transparency and meaningful engagement.

Another notable point is the integration of high-level principles, specifically the international human rights framework, in guiding the use of technologies. The analysis highlights that technologies like AI and data impact various aspects of life and suggests that the international human rights framework can be embedded throughout the technology supply chain through standards. This approach promotes a rights-respecting world where everyone benefits and ensures that the development and usage of technology uphold human rights.

In conclusion, the analysis presents various perspectives on the impact and governance of future technologies. It highlights the importance of inclusive technology development, multistakeholder engagement, connecting multilateral spaces through people, and embedding high-level principles such as the international human rights framework. By considering these perspectives and incorporating them into technology governance, it is possible to strive towards a more equitable and beneficial technological future.

Gallia Daor

Intergovernmental organisations, such as the Organisation for Economic Co-operation and Development (OECD), have demonstrated their ability to be agile while maintaining a thorough and evidence-based approach. The OECD’s AI principles were adopted in an impressive one-year time frame, making it the fastest process ever at the organisation. This highlights the organisation’s ability to adapt to the rapidly evolving landscape of emerging technologies.

To facilitate global dialogue on emerging technologies, the OECD established the Global Forum on Technology. This platform provides an avenue for stakeholders from different countries and sectors to come together and discuss the challenges and opportunities presented by these new technologies. This engagement ensures that decisions made by intergovernmental organisations are well-informed and incorporate perspectives from various stakeholders.

The importance of multi-stakeholder and interdisciplinary engagement in decision-making within intergovernmental organisations is evident through the OECD’s network of AI experts. With more than 400 experts from different stakeholder communities, the OECD is able to tap into a wide range of expertise and perspectives. This inclusivity ensures that the decisions made by the organisation are comprehensive and representative of diverse viewpoints.

Recognising the need to keep pace with emerging technologies, intergovernmental organisations like the OECD have established dedicated working groups that focus on different sectors. These working groups, such as those on compute, climate, and AI future, allow for a deeper understanding of the specific challenges and opportunities posed by each sector. By focusing on these emerging technology sectors, intergovernmental organisations can proactively address the unique issues that arise within each area.

High-level principles, such as trustworthiness, responsibility, accountability, inclusiveness, and alignment with human rights, are considered important and relevant for all technologies. Intergovernmental organisations aspire to develop technologies that are trustworthy, responsible, and inclusive, while also being aligned with human rights. It is essential to factor in potential risks to human rights and ensure accountability in the development processes of these technologies.

However, there is often a gap between these high-level principles and their actual implementation in specific technologies. Variations exist between technologies, and the importance of certain issues like data bias may be specific to AI. This calls for a careful examination and consideration of these factors during the governance processes of emerging technologies.

To address the complexity and differing requirements of different technologies, there may be a need to break up the governance processes into smaller components. By doing so, intergovernmental organisations can accommodate the varying expertise and process requirements associated with different technologies. This approach ensures that governance structures are tailored to the specific needs of each technology, promoting more effective decision-making and implementation.

In conclusion, intergovernmental organisations have shown their ability to be agile, adaptable, and evidence-based in the face of emerging technologies. The OECD’s fast adoption of AI principles and the establishment of the Global Forum on Technology exemplify their commitment to staying at the forefront of technological advancements. The inclusive and interdisciplinary approach to decision-making, along with the focus on specific technology sectors, further enhances the effectiveness of intergovernmental organisations in addressing the challenges and harnessing the opportunities presented by emerging technologies.

Carolina Aguirre

The analysis considered various perspectives on technological development and governance. The speakers emphasised the need to maintain openness in both processes, drawing parallels with the Internet Governance Forum (IGF), which has nearly 20 years of experience in dealing with open technology. They highlighted that the IGF’s bottom-up approach plays a vital role in achieving openness.

The growing influence of the private sector in shaping technological developments was recognised as an important aspect. The speakers noted that many new technological advancements are being driven and progressed by private companies. This recognition indicates the need to understand the limits and the actors shaping technology ecosystems.

There was concern that new technologies are being developed behind closed doors, deviating from the open model that characterised the Internet's original development. This closed development raises questions about transparency and inclusivity in the creation of new technologies.

The speakers universally agreed that technology is not neutral and is influenced by societal values. This recognition signals the importance of considering the ethical and social implications of technological advancements. The broader impact on society must be a critical consideration in technological development and decision-making.

The adequacy of existing institutions in the face of challenges posed by globalisation and technological development was called into question. One speaker, Carolina Aguirre, expressed scepticism about the sufficiency of the institutions currently in place. The analysis revealed a need for institutions to adapt and keep up with the rapid changes brought about by technological progress.

Furthermore, the analysis highlighted the decline of globalisation in terms of trade and international dialogue. This observation suggests that traditional processes concerning internationalisation are struggling to keep pace with technological advancements.

In conclusion, the analysis presented a multi-faceted view on technological development and governance. The speakers stressed the importance of openness, raised concerns about closed development, highlighted the influence of the private sector, and acknowledged the influence of societal values on technology. Additionally, the analysis pointed out the challenges faced by existing institutions and the decline of globalisation. These insights shed light on the need for continuous evaluation and adaptation in the realms of technology and governance.

Thomas Schneider

The analysis highlights several key points regarding disruptive technologies, global digital governance, and the regulation of artificial intelligence (AI). Firstly, it emphasizes the need for a change in approach towards disruptive technologies. As technologies continue to develop rapidly, with increasing complexity, it is important to adopt a more distant perspective to effectively regulate them. The analysis suggests that machines and algorithms can play a crucial role in developing regulations for disruptive technologies, taking into account their unique characteristics and potential impact.

In terms of governance, the analysis asserts that collaboration is a better approach than conflict. It argues that leaders have been losing sight of the notion of cooperation, which is crucial for achieving sustainable and effective global digital governance. Collaboration is believed to promote a better working environment and foster long-term solutions to complex challenges.

Moreover, the analysis delves into the regulation of AI. It notes that human behaviour remains relatively stable over time even as technology changes rapidly, which necessitates adaptable regulation of AI. The historical reactions to new technologies, including fear of job loss and ignorance of technology's potential, are cited to highlight the need for a balanced and adaptable regulatory framework.

The analysis also highlights the importance of building a network of norms in response to advancements in AI. It emphasizes the need for different levels of harmonization depending on the context and argues that institutional arrangements should adapt to technological innovations to effectively govern AI.

Additionally, the analysis makes an interesting observation about the notion of a multi-stakeholder approach. It suggests that this concept is here to stay and proposes that with technology dematerializing, rule-making should also dematerialize. This means that decisions should be made based on stakeholder involvement rather than geographical boundaries, indicating a shift towards a more inclusive and participatory governance model.

In conclusion, the analysis brings attention to the need for a change in approach towards disruptive technologies, the importance of collaboration over conflict in global digital governance, the need to adapt regulation of AI in response to human stability, the necessity of building a network of norms to govern AI advances, and the significance of the multi-stakeholder approach in dematerializing rule-making. These insights provide valuable considerations for policymakers and organizations looking to navigate the complex landscape of disruptive technologies and governance in the digital age.

Alžběta Krausová

The convergence of technologies has become a cause for concern as it raises ethical and privacy issues. The development of human brain interfaces is particularly problematic as it intrudes on the privacy of our minds. This invasion into individuals’ innermost thoughts and feelings is seen as a major problem, raising questions about personal autonomy and the protection of mental privacy.

Additionally, there is a growing recognition of the importance of defining our future world. As technology continues to advance rapidly, it is crucial to establish clear guidelines and regulations to ensure its safe and ethical use. This includes operationalizing our current ethical principles in new and unfamiliar situations that arise with technological advancements. By applying our existing ethical frameworks to emerging technologies, we can address the ethical challenges they present and ensure they align with our values and principles.

Furthermore, it is argued that considering case-by-case scenarios is necessary when making decisions about the use of artificial intelligence (AI) and other advanced technologies. While general principles and guidelines guide our ethical considerations, it is important to take into account the specific context and circumstances surrounding each situation. This approach enables us to address the unique ethical dilemmas that may arise and make more nuanced and informed decisions.

Moreover, valuing cultural understanding and emotional connections is emphasized as a means to reduce inequalities and foster positive interpersonal relations. Recognizing the diversity of cultures and perspectives in our global society can help bridge gaps and promote empathy and understanding among individuals from different backgrounds. Striving for understanding beyond a rational level, including emotional understanding, is seen as crucial for building inclusive and harmonious societies.

In conclusion, the convergence of technologies presents complex ethical challenges that necessitate attention. Defining our future world, operationalizing our principles, considering case-by-case scenarios, and valuing cultural understanding and emotional connections are key aspects that stakeholders should address. By doing so, they can navigate the ethical landscape in a way that promotes fairness, inclusivity, and respect for individual privacy.

Cedric Sabbah

Cedric Sabbah, an expert in international governance, identifies the challenges that rapid technological development and its frequent disruptions pose for global governance. He observes that, periodically, a new technology becomes a major concern for the international community. These concerns have evolved from critical infrastructure to IoT, ransomware, and internet governance. Emerging issues, such as jurisdiction, content moderation, and encryption, have also come to the forefront.

Sabbah highlights the ever-changing nature of the global tech industry, emphasizing that international organizations cannot afford to be complacent. He suggests that an agile and bottom-up approach could assist in addressing the governance challenges posed by technology. Sabbah believes that as technology constantly evolves, policies need to be regularly revisited and updated. Incorporating domestic bottom-up principles into international governance may bring value in tackling these challenges.

Furthermore, Sabbah emphasizes the importance of future-proof and flexible global tech governance. He proposes an approach that can adapt to the changing technological landscape while maintaining long-lasting effectiveness. Sabbah also recognizes the potential of multi-stakeholder processes and bottom-up approaches in enhancing the quality of global governance mechanisms. He advocates for involving non-traditional stakeholders in discussions and encourages the development of rules by specialized networks.

However, the existence of numerous international bodies and initiatives addressing similar topics raises concerns about fragmentation within these organizations. This fragmentation includes bodies within the UN as well as external entities like ITU, UNESCO, Human Rights Council, WIPO, OECD, COE, and the EU. It prompts the question of whether fragmentation is advantageous, allowing for diverse efforts, or a disadvantage that diminishes focus and resources.

In conclusion, there is a need to reassess existing concepts and explore new approaches to effectively govern emerging technologies. Sabbah’s insights underscore the significance of an agile and bottom-up approach, as well as the potential value of multi-stakeholder processes in addressing technology governance challenges. The concern regarding possible fragmentation within international organizations calls for thorough examination and coordination of processes to ensure effective resource allocation. Overall, global governance mechanisms must adapt and evolve in response to the rapidly changing technology landscape.

Session transcript

Cedric Sabbah:
Cedric, Shomael? Hi. Hi. Yeah? You guys hear me? Yes, we can hear you. Awesome. Is now a good time to start? So we are about to start. Okay. I’m watching a game of musical chairs. Hi, Cedric. Yeah, I hope so. Hi, Cedric, I think you can start. Okay, awesome. Do you guys see the PowerPoint? Yes. Okay, awesome. Okay, so, hi, everyone. My name is Cedric Sabbah. I’m Director for Emerging Technologies at the Office of the Deputy Attorney General for International Law at Israel’s Ministry of Justice. I apologize for not being here in person. My colleagues and I had to cancel our flight at the last minute due to the difficult situation here in Israel. The events taking place here are very sad, and it’s difficult for me to proceed as if everything’s a-okay, because it’s not. However, I do believe that the topic today is important. And thanks to the support of the panelists and other friends, I’ll do my best to make it as interesting as possible. So let’s get straight into it. In this afternoon’s panel, we’re going to go on a kind of a sci-fi policy adventure. I’m going to ask all of you, our panelists in particular, to project yourselves in, let’s say, IGF 2030. Maybe it’s taking place on a gigantic international space station somewhere. And you’re trying to figure out how the international community should deal with this new thing that’s happening in technology, whether it might be quantum sensing, quantum computing, quantum communications, human-machine interface, immersive technologies. And we’ll ask our panelists now how they envision the international community dealing with these issues that could arise in the future. So as you all know, technology develops rapidly. We’re seeing disruptions every year. Every few years, we’re seeing things. And those of us who follow the technology, we see it happening incrementally. 
But there’s usually like a tipping point where the international community focuses on the next big issue and decides this is what we need to deal with, only to be replaced by another issue a few years later. So just looking back in the days of, you know, when we started with cyber, so everybody was talking about critical infrastructure, and then it was IoT, and now it’s ransomware and Internet governance. In the past, I remember having a lot of discussions about jurisdiction and then content moderation. And now we’re talking about, you know, decrypting, companies providing assistance to decrypt child sexual exploitation material. For AI, no sooner than we were talking about high-risk AI and, you know, we had in mind biometrics and discrimination, all of a sudden, generative AI becomes the thing we’re talking about. So this is the known challenge of how law and policy play catch up to technology, and maybe it can’t really ever catch up. Everything is highly dynamic. And there’s never a point at which international organizations can just say, you know, we can pack our bags now. Our work here is done. It’s always evolving. And one specific issue I’d like to explore today is whether an agile and bottom-up approach can help international institutions deal with these challenges. I’m thrilled to introduce to you an absolutely all-star cast. So we have online Carolina Aguirre, a professor at the Universidad Católica del Uruguay in the Department of Humanities and also a former member of the UNESCO Expert Working Group on AI. We have Gallia Daor, a policy analyst at the OECD who coordinates the activities of CDEP. We have Sheetal Kumar, Head of Engagement and Advocacy at Global Partners Digital. Dr. Alžběta Krausová online, who’s head of the Center for Innovation and Cyber Law Research at the Institute of State and Law in the Czech Academy of Sciences. And Chris Jones, Technology and Analysis Director at the UK Foreign, Commonwealth and Development Office. 
And of course, Ambassador Schneider, Thomas Schneider, who’s Ambassador and Director of International Affairs at the Swiss Federal Office of Communications in the Federal Department of the Environment, Transport, Energy and Communications. And to me, he’s Chairperson extraordinaire of the CAI at the Council of Europe. So the structure of this session will be as follows. We’ll divide it into three parts. I’ll try to finish talking soon so we can give the floor to the panellists. First, we’ll talk about the challenges of international governance that are presented by the next wave of disruptive technologies and maybe looking at the past of AI and Internet governance to see what we can learn. Then we’ll explore whether principles of agile governance, and in particular, bottom-up principles that we know from domestic policy, can be sort of internationalized and harnessed to deal with global tech governance. And lastly, we’ll try to identify some common principles that can be long-lasting and future-proof to enable a certain degree of institutional agility without losing sight of the important things. For each of these topics, I’ll ask one or two panellists to share their thoughts, and then the other panellists can chime in. And then Alžběta, towards the end, will provide some concluding remarks and observations, and hopefully we’ll have some time for Q&A. One disclaimer, what I’m going to say is my own personal views, not necessarily the views of the government of Israel. Now, before we start, just a second. Before we start, and just to change things up a little bit, the panel includes a challenge for you, the audience, in person and online, and also for the panellists. So I’ve asked the panellists to pick a few songs and artists that they like. You see them on the right, and the names of the panellists are on the left. 
And I also picked a song, and I selected from these the songs that connect with our panel today, and also I used Bing’s image creator to generate some really nice images that are inspired by the song titles. The challenge for all of you is to try and guess who picked which song, and all the speakers, including me, will be including a small clue in the presentation to help you figure it out. And you can give your answers to me in the Zoom chat or to any one of the panellists. I was planning on giving the winner some kind of small prize, but obviously I can’t right now, so I’ll try to keep that as a rain check for next year’s IGF or some other way. So now that all these explanations are out of the way, let’s get right into it. So let’s start talking about the challenge. So I’ll address first Thomas and Carolina. So the challenge for international organisations. So the first question is what lessons can we learn from our experience with Internet governance and AI governance in order to address the next wave of disruptive technologies? Specifically, what do you think should be the role of international bodies in addressing global digital governance challenges? I’ll paraphrase something that I heard a few days ago from my friend David Fairchild in another session. Many of the international bodies right now are, you could say, analogue bodies, asking them to deal with problems of a digital world. And also, if you can briefly address what I think is an elephant in the room, which is geopolitics that have a major role to play in shaping the debates. For example, ITU discussions on Internet governance, difficulties in making progress in the UN ad hoc committee on cybercrime. So can we really have a meaningful discussion on desirable and implementable global policy goals in light of geopolitics? So we’ll start with Thomas and then Carolina.

Thomas Schneider:
Okay, sometimes it helps to turn devices on. It’s a pity that you’re not here, but of course we do understand this. But I hope to see you again soon in Strasbourg, actually. Yeah, I think it’s a nice setting because it tries to be a little bit more forward-looking than other sessions. And hopefully a little bit, let’s say, also inspiring in a different way. Well, the challenges are, let’s say, substance-based and then there are geopolitical challenges. And this doesn’t go just for intergovernmental organizations. It actually goes for all those that are somehow dealing with policy and with rulemaking. Maybe I have to start with: this is a crucial moment in history and things will be completely different tomorrow than they have been yesterday, because this is what you hear throughout history, ever since speeches have been recorded. Every person thinks that that particular moment in time is the moment where everything will change. And it’s true. Everything changes every day, but there are also recurring patterns in human behavior, not just in physics, but also in human behavior. So, to cut the long story short, I think, but nevertheless, we have an extremely fast development of technologies, of growing complexity, of being less material, which has effects compared to technologies that used to be material-based because you couldn’t copy them so quickly. You couldn’t move them so quickly. You cannot apply them remotely. You cannot use a car remotely while being in another continent, for instance, and so on and so forth. So, there are many similarities with previous disruptive technologies in the way that humans reacted to it, in the way they were regulated. The disruptiveness of the new technologies, I think, is of a different nature, and that has implications. And it forces us as rulemakers or us as society to adapt. But I’m not sure whether we have to adapt in a sense that we have also learned to think quicker and calculate quicker in our brains. That may be difficult. 
So, we have to actually probably change the way at which we look at things. We may have to look at things a little bit more, again, like maybe with the Greeks and the Romans, from a little bit more of a distance and say, okay, what are the big developments? And trying to understand them. And then maybe use machines and use algorithms to develop regulation and develop concepts to cope with algorithms because our brains may not be able to compute the nitty-gritty details also with regulation for this. And, for instance, to give you an example, we have parliamentarians now in Switzerland that use ChatGPT to formulate parliamentarian interventions and requests. And we are not yet allowed, but we are waiting for the moment where we decide because it takes resources to answer these requests. And the more we get, the more resources we need. And an efficiency gain for us would be if we could also write the reports that are supposed to reply to the parliamentarian interventions with ChatGPT. So, in the end, you have two machines talking to each other and we can both go on holidays, the parliamentarians and the administration. I think that’s something to think about in the end. But now, to be serious, we need to find ways to become more agile, more dynamic, without becoming stressed. So, we are going in the wrong way if we try to do things quicker. We have to do things differently as human beings in general, but also as rule makers. So, we need to use the new tools to face the challenges that the new tools create. Otherwise, I think it won’t work. Don’t ask me how. I’m not a technician. Maybe Vint and others know. But at least on the concept level, I think we need to find a different approach. And just two words to the geopolitical environment. 
And this is something that, as somebody who has been in this since the WSIS, since 2003, in that period, we were all still in the hope of the end of history with the fall of the Berlin Wall, with Nelson Mandela, with people with charisma, avoiding wars, creating peace, bringing people together. And we were hoping that the new technologies would bring us together, would strengthen the rules-based international order based on shared values. Unfortunately, we somehow have lost the track. And in particular, the leaders, be it dictators or be it leaders that have been elected by more or less democratic processes, are losing track of this notion of cooperating is better than fighting against each other. And I just hope, I’m also a historian, that we don’t need to go to really ugly wars in order to realize that cooperation is better than fighting each other. But for the time being, it seems a little at least unsure how we deal with this. And then, of course, technologies are not just new tools to do good things, but also to do bad things. And I’m not a prophet, so I will not go into detail. But I think we should realize and we should work together with people that realize that working together is actually sustainable. It’s also more fun. It doesn’t just create less harm. It’s actually also more fun than working against each other. Because if that’s not the case, no intergovernmental institution or multistakeholder institution works because it’s all built on a notion of we cooperate together. So you can’t blame the ITU or the UN for not producing results if those that are shaping it, i.e. the member states or the stakeholders in multistakeholder institutions, are not willing to cooperate. So this is just a few thoughts of mine. Thank you.

Cedric Sabbah:
Carolina, you’re up next. Can you hear us?

Carolina Aguirre:
Yes, thank you. So to address these questions and following on Thomas’s intervention, so I do think that we have nearly 20 years of experience on our back with dealing with an open technology as the Internet and then with AI governance as an emerging challenge, global challenge, but that also is spread out very much everywhere. I do think that we still need to make strong efforts in keeping up the momentum on spaces and processes that achieve some kind of, in a way, what the IGF does in terms of its openness and bottom-up spaces. And we are seeing that kind of reflection around some of the AI governance developments, which look positively at spaces such as the IGF and some of the Internet governance approaches that have been taken over the last nearly two decades. We do need to sort of try to understand the limits and the actors that are shaping these ecosystems. So in that respect, I do believe that keeping up this effort despite maybe the less positive and maybe less vibrant sometimes mood that we may have towards these processes is very, very relevant in line with what Thomas was mentioning concerning cooperation as well, with trying to get to some kind of mutual understanding. I also think that trying to get to the idea of working together is also related with the third part of this intervention, the question, the prompt that you raised, Cedric, concerning the geopolitics, because we are in a different time and moment concerning globalization. So geopolitics today is unfolding as it did unfold differently in the early 2000s or late 90s. And now those states are certainly extremely important. I mean, so many of these new technological developments as in the past, they are also being shaped and taken forward by the private sector. 
And so when we talk about geopolitics and address technological changes and technological momentum, I mean, we do have to also address the elephant in the room on how to sort of work and define the scope and space for action for this private sector that has an increased power. And we are seeing that kind of momentum also shaping how we address and have concerns on how some of these new technologies are being sort of developed behind closed walls and are much less open by nature in terms of what the Internet originally was and still is. And finally, as a final observation, I mean, when we think about the developments of these technologies, including the Internet, I mean, technology is never neutral. Technology is never non-reliant on societal values. So we do have to keep that in mind when thinking about developing international processes around these new technologies. Thank you.

Cedric Sabbah:
OK, thanks, Carolina. I want to give a bit of an opportunity to other panelists to just chime in. It almost seems like, hearing from both of you, Thomas and Carolina, I’m grossly oversimplifying, but it’s almost like you’re saying we’re OK. The institutions we have are in place. The world is what it is. And we’ll just have to deal. Carolina, you’re not agreeing. So I misunderstood. Could you just refine what I’m saying?

Carolina Aguirre:
I’m certainly not saying that we are OK. I do think that we do have some interesting foundations, but that the challenges ahead are enormous and particularly because we are not as keen as Thomas, I think, as I understood him, correct me if I’m wrong, was stating that we are in a different moment in terms of how we address global cooperation as one of the angles to address globalization. Globalization is in decline in many respects concerning trade, concerning international dialogue. So, I do think that it is indeed an extremely challenging moment and maybe probably most of the processes that we are seeing concerning internationalization are really not up to the challenges that we face with the development of these technologies.

Cedric Sabbah:
OK. I’d like to give a few moments for Chris or anyone in the room if you want to relate to what you just heard. I think that’s a prompt, isn’t it, Cedric?

Chris Jones:
I think you want me to say something. So, first of all, Cedric. Not specifically you. So, I will. I’m delighted to be here. And, Cedric, I’m sorry you can’t be with us here personally, but I’m really happy to see you safe, albeit on a screen. So, you know, best of luck with everything that’s going on. I agree with what both of my co-panelists have just said. Geopolitics is a messy business, particularly right now. But I think there’s an opportunity here to focus on the areas where we agree, not on the areas where we disagree. And too often, and I’m sort of stealing my remarks from later, too often I feel we start with too big a picture. So, we try to do too much in one go. I’m an engineer, and my natural tendency is to break things into the smallest possible component I can because I’ve got a very small brain. And that means I can understand them. I can fix them. I can make them work. And I think there’s some parallels here for how we work in our multilateral and international organizations in helping address some of these challenges.

Cedric Sabbah:
Okay. I think there’s a lot to unpack in everything. But we’ll have the opportunity to continue to delve in. So, I’d like to go now in a little bit of the uncharted territory. We heard in a few panels in the last few days the idea of agile governance and sandboxes in domestic regulation in order to smartly regulate AI. And what I want to ask is whether this idea can be useful for global governance as well. Are international organizations capable of being agile? Or is this concept maybe completely antithetical to the way they’re meant to operate? When we talk about bottom-up regulation, the underlying idea generally is that rather than top-down where you have like a central institution that promotes and implements processes for its constituents, in bottom-up we empower the constituents to deal with the issues based on their concrete needs from the ground. We see the good in everyone’s contribution. So, can bottom-up and multi-stakeholder processes contribute to the quality of global governance mechanisms? And if so, how? Practical examples of bottom-up approaches to consider and I invite you to address any one of these or all of these or maybe something else. One example that’s already done to a large extent by the OECD is fostering policy experimentation by allowing exchanges of views. So, setting up a tech policy lab for international information sharing. Another one is actually fostering the experimentation by states by allowing for a space in which states can maybe succeed and fail in certain examples and then learning collectively from the successes and failures. Another one is maybe integrating in the bottom-up approach, integrating other stakeholders that maybe are not traditionally in the conversation. One example that comes to mind from our experience with AI in Israel is small and medium enterprises. And also maybe encouraging rule-making by specialized networks. 
So, instead of having, for example, the large generalist organizations that deal with the big issues, having networks of, for example, privacy regulators or cybersecurity regulators or AI regulators in the future to deal with things on their own. So, I’ll ask Galia and Chris, I’m turning to you as well again. I think each of you have unique viewpoints that you can share, so I’ll ask you to go first.

Chris Jones:
Yeah, thank you. So, I'll go first just because I've been asked to. First of all, I'm interested in these songs, and I really hope people in the audience are doing better than I am, because I have no clue. But when Cedric first suggested it, I thought Ambassador Schneider was actually going to play them all, which would be amazing. Look, I think it's a little bit of a loaded question, being at an event organized by a large international organization, to ask whether they can be agile, because I think that could be quite a dangerous place to go. But I do think they can. I do think large organizations can be agile, but not in the way that we're currently organized and the way that we operate. So, I think there are some parallels we can take from agile software development, where we define small chunks of activity and we work out how we define those. We don't define the order in which we deliver them; we just define what they are. And the plan is always to get better: to incrementally deliver more, rather than trying to deliver everything in one go. And I think there's a parallel there for how we work internationally. That's what we're trying to do with the UK-hosted AI Safety Summit. We can't do all of AI; on the 3rd of November, AI is not going to be solved. But what we can do is focus on a very narrow slice and get some broad international agreement. And I think there's something we can do there. The second thing I wanted to talk about was different types of governance. I think we always tend to focus on values first: we try to agree on what values we want to see. And this, I think, comes back to the geopolitics. I don't think we will ever agree on a common set of values. Different countries are different countries for a reason. We have different national identities; we have different things that are important to us. And we have to embrace that diversity. But that doesn't mean there aren't some common values we can agree on.
So, I think we absolutely should focus on that. But there's another type of governance: technical governance. The things that we need to have in place in order to be able to interoperate, to talk, to work together. And I think it's often easier to focus on those, because we can get the engineers focusing on the really practical details of what it takes. I think there's a difference between how and what, and very often we focus on the what, whereas what's really important is the how. And I'll give you the example of the UK's online harms legislation. That has taken us six years, and we're nearly there. But even when we get there, you could never pick that legislation up and give it to another country; it just wouldn't work. But what would work is the process of how we got there. There are some key things you need to do to be able to develop that type of legislation. You need to define what constitutes a vulnerable group. There will be some common themes: children, I think everybody agrees, are a vulnerable group. But different minority groups will be different in different countries. So, sharing the process, the how, I think, is important for bringing these things together. Cedric, you talked about multi-stakeholderism. I think that is critical. I think all governance needs to be multi-stakeholder, because nobody has all the answers. Governments certainly don't have the technical expertise. Technology companies don't have the legislative expertise. And none of those really understands the impact on citizens the way civil society organizations do. I think the IGF is a great example of how you bring that multi-stakeholder community together. I mean, look at the range of organizations here. Whether you're the boss of a telecoms company or a Ministry of Foreign Affairs official like me, we couldn't be more different. But we're all here talking about common issues.
And then finally, Cedric, you wanted an example of bottom-up and where this has worked. And I really like the example of the airline industry, where there was a need to work together and agree on common standards. You know, we needed to fly planes from one country to another, so we needed a way to share data, a way to build planes that could fly into different territories. And that really forced people, from a bottom-up perspective, to work together. And I wonder what the parallel might be for artificial intelligence, or quantum, or, dare I say, human rights. So, thank you. I'll hand over to my colleagues.

Gallia Daor:
Thank you, Cedric. I don't know if you wanna respond to that first or… No, go ahead, Gallia. So, thanks for this. I do love how all your examples go: "I'm an engineer, so I like to break things into little bits." Well, I'm a lawyer, so I like process. And I think that will really be part of my answer, because I think, yes, it's very common to think that intergovernmental organizations can't do that; you know, what has agility got to do with intergovernmental organizations? But I think partly it's by design, because, and I'm speaking from the perspective of an intergovernmental organization, if we want to be accountable to our members, if we want to be transparent, if we want to have multi-stakeholder consultations, if we want to be evidence-based, if we want to be thorough, it's hard to also be fast. And if we want to maintain credibility, meaning that stakeholders actually want to come and engage with us, because stakeholders have limited capacity and limited time and would only come if the conversation is worth it, then I think we also need to make sure that we uphold these standards. Nonetheless, the world is changing and things are happening, and in the technology area in particular, things are happening very fast. So, we can't just stick to the way that we did things 60 years ago, for example, when the OECD was established. And, Cedric, you mentioned playing catch-up with technology, or trying to be more anticipatory and planning ahead. I think we're moving there, and I can give a couple of examples from the OECD's perspective of where we've tried both to be agile and to have this multi-stakeholder, bottom-up approach. One example that you mentioned briefly earlier is the OECD AI Principles, which were adopted in 2019 and were the first intergovernmental standard on AI.
So, one thing to say about that is that it was the fastest process ever at the OECD to develop a recommendation. We did it basically in one year, which sounds like a lot, but really isn't for something so complex. And obviously it builds on a lot of work that had been done before, but the process itself was remarkably fast, and it was nonetheless absolutely multi-stakeholder and interdisciplinary. And I don't think we would have gotten there; I'm sure we would not have gotten there without that kind of engagement, which was essential. Also on the AI front, we are now, as part of the work to support countries and organizations in implementing these principles, running a very extensive network of AI experts, with more than 400 experts from different stakeholder communities and different countries. And that actually helps us. It sounds like big machinery, but it actually helps us move fast. And I think it's a really helpful model because, like Chris said, we can break it up into little bits, into little working groups that focus on different aspects. And we can also adjust. So, we started with one set of working groups, but we've evolved them. We now have a group that focuses on compute, which isn't something we worked on at first. We have a group that focuses on climate. We have a group that focuses on AI futures, which is sort of generative AI plus what we might see coming ahead. So, that's perhaps one example. And then beyond AI, and AI has taken up a lot of space in the discussions that I've been in over the last couple of days, so beyond AI, looking at emerging technologies, and also looking at my colleague Elizabeth here.
At the OECD, we created the Global Forum on Technology about a year ago, with a lot of support from the UK, really as a global venue for dialogue on emerging technologies and for anticipating and preparing for the opportunities and challenges they might bring. And I was looking at Elizabeth because she's actually leading this project. It's multi-stakeholder by design, and it also lets us try to move relatively quickly on these different technologies: for example, quantum; for example, immersive technology. That's not to say everything is perfect, to your question, but I think there are ways to try to address some of these by-design challenges in how international organizations are built.

Cedric Sabbah:
Okay, thanks, Gallia. Here too, I'd like to invite maybe Sheetal, who hasn't spoken yet, as well as Thomas, Carolina, Alžběta, whoever would like to add in their two cents on this agility question. Can you? Yes, okay, great.

Sheetal Kumar:
Thank you first for having me here. Let me start with the session description and all of the technologies listed there: the emergence of new tech like quantum-related developments, metaverse platforms, nanotech, human-machine interfaces. It all sounds like going to a theme park and maybe having a great time. But actually, for a lot of people, this future could be a very difficult one. For people who are already marginalized, for women, it's not necessarily going to be a good future just because the technology is different or faster or more complex. So, as I think Carolina was saying, technology is never neutral. And what we can do about that is ensure that the development of it, and indeed the governance of it, is more inclusive. We can't predict the future; I don't think any of us would claim to do that. But what I think I can say with some certainty is that there are going to be 24 hours in every day in the future, unless something changes. So that's really a point about resourcing, right? If we have 24 hours a day, we sleep for about eight hours, ideally. The rest of the time, what do we do with it? We work, we try to shape this world that we're in. And what I would say is that there are spaces already where we're doing that work, and they can be improved. As I think Chris was saying, we can work with what we have and make things better incrementally. What does that mean for the multi-stakeholder spaces where these discussions are happening? I think improving those, making them more open; where standards are being developed, making those more diverse; strengthening the IGF, for example, and connecting the discussions that happen here with the discussions that happen elsewhere, in multilateral spaces. So, to give an example from the IGF, because we're here, and I presume we all care about the IGF, that's why we're here: I've been involved in some of the intersessional discussions at the IGF.
And what I think is a good example, an okay example, actually, I'm going to say, because I think it could have been better, is the Best Practice Forum on Cybersecurity. The UN is having discussions about how to ensure that states behave responsibly in cyberspace. They've developed norms, and they are continuing these discussions. How to implement the norms has been an ongoing question, and so the Best Practice Forum over the last few years has been taking the norms, analyzing large cyber incidents that we're all familiar with, and assessing how those have impacted people, like first responders and the people on the ground, to inform that implementation. These are multi-stakeholder working groups, or intersessionals, and we have had governments and others involved, particularly with the Policy Network on Internet Fragmentation, actually. It would be great, I think, if governments and industry and other stakeholders and civil society prioritized having, in their portfolios, time to engage with these forums and to bring their feedback, because we have to connect these spaces through people. We don't need some novel technology to connect them with what is happening elsewhere, and in that way we can strengthen and empower our spaces to be more diverse and more inclusive. That also goes for opening up multilateral spaces, through consultations, through engagement, and through modalities that really allow for meaningful inclusion. So my final point, I guess, is that future-proofing doesn't have to be high-tech. It can actually be quite basic, quite simple. Of course, I'm not saying that using generative AI to help you with your reports wouldn't be a good idea, but it doesn't always have to be that way, and I think there are some basic things that we haven't done that we need to do better. Those are some examples which I hope help. Thank you.

Cedric Sabbah:
Does anybody else want to say something about this concept of agility? I'm not seeing anyone. So, okay, we'll try to package everything a little bit later. Now, before I move to the next slide: it was pointed out to me by the person who chose the song from Rage Against the Machine, and I won't disclose who it is, that I made a mistake in the title of the song. The song is called Take the Power Back, so I'll have to change the image later, but anyway, keep that in mind. We're moving now to the next, and I guess final, theme for today. So I think it makes sense to say, and you've all kind of hinted at these concepts before, I think, Sheetal and also Gallia, all of you who've spoken about multistakeholderism, that agile governance, if it's this kind of theme that we're trying to enshrine in the way international organizations work, doesn't operate in an absolute vacuum. There should be, I guess you could say, a subtle line between agility and anarchy, between experimentation and a free-for-all. So the question that begs to be asked is: are there any universal principles of global tech governance that should be promoted across the board? Here in the image, I connected the song Born to Run by Bruce Springsteen because it includes the line "I want to guard your dreams and visions," which I think is a nice metaphor for the idea of responsible innovation. So we have all these common buzzwords that have served us well so far, I think, in internet governance and AI governance: multistakeholderism, interoperability, the idea that human rights that apply offline also apply online, trustworthy, human-centric. Do you think these concepts remain relevant for all other technologies, such as immersive technology, human-machine interfaces, all the quantums? Or do you think maybe they all apply, but they apply differently?
Or do you think we might need to come up with new concepts and frameworks that enable us to grapple with the new challenges? Also, a lot of the issues are cross-cutting, so when we talk about how we don't want fragmentation, we actually see a fragmentation of processes within the UN. There's the ITU, UNESCO, the Human Rights Council, WIPO, and then outside of the UN we have, you know, the OECD, the COE, the EU, of course, which is a major player, and then we have topic-specific initiatives like GPAI, like the AI Safety Summit that Chris mentioned earlier. So is this fragmentation of efforts, in your opinion, a feature or a bug? I'd like to ask Sheetal first to address these questions. Any universal principles? Should we be aiming for fragmentation, allowing for fragmentation? What do you think?

Sheetal Kumar:
Thank you for those questions. I think there's something semantic sometimes when we talk about this topic. Fragmentation, if it's diversity, then great. If it's, for example, normative efforts that are all aligning and reinforcing common principles, then great. If it's duplication, and as I said, we have limited resources, so if we're going to different places trying to do the same thing, or spending our time developing competing frameworks, then no, it's not. And there is a risk of that if we don't coordinate and collaborate on some of these emerging issues. There is a lot happening at the moment, as I think we heard earlier, on how to govern AI, but at least we have, and I know this is something that people have felt fatigued about at this IGF, at least we have a space where we're coming together and we are hearing about what everyone else is doing. We can try to make those connections and ensure these deliberative spaces and decision-making spaces are inclusive. So my answer to you, Cedric, is that it's not necessarily a bad thing to have various processes at play, as long as they coordinate and they're inclusive. And I also just wanted to point out, as I mentioned earlier, the importance of connecting an open and inclusive deliberative space like the UN's IGF, which is so unique. We also need to remember that the IGF is not just this annual event; it is the intersessionals, it is the hundreds of national and regional IGFs that happen every year and provide these spaces for people to come together, and it is very unique in that way. This is something we need to preserve, and so if we try to create something else that is exactly like it, that is a problem. But the Leadership Panel, and I know we have a member of it here, is very important for creating these connections with those who can then take on messages and connect to other spaces.
So I think what's really important is that we ensure that when we're governing these new technologies and building the processes for them, they're truly inclusive by design. We have endless tools and ways to do that; we know how to do it; we need to do it. And I would say that, as I said, it's kind of old governance, or old tech for new tech, perhaps. It's not that complex to ensure that information is shared in a timely manner, that information is clear, that it can be accessed by a range of different people, and that they're invited to the table. And of course new technologies can also be deployed to support that. So hopefully we can turn our minds to actually operationalizing what we already have, and use good examples such as those we've heard before, so that when we're confronted with these new challenges, the principles, you asked me about principles, the principles of transparency and engagement, of openness, of maintaining users' and people's autonomy, and of preserving openness, are all enshrined and preserved as we face the new challenges that we are facing.

Cedric Sabbah:
Maybe I’ll, does somebody else want to take the mic?

Gallia Daor:
Hang on, yeah, sorry. No, I was just thinking, as Sheetal was speaking, and also on your questions, that one of the things we've been thinking about at the OECD, and I'm sure in other places too, is really the gap between the fairly high-level principles and their implementation. You asked, Cedric, whether we think that trustworthy, responsible, and so on are still relevant. So I think yes, absolutely. And I think they are relevant to, I don't know about all technologies in the world, but in principle, yes. We want them to be trustworthy. We want them, or their development, to be responsible. We want accountability. We want the process to be inclusive. And obviously we want alignment with human rights wherever there's a potential risk to human rights. And I think, also to Chris's point earlier, these are the core values that we have to agree on, as we already have. So I think yes at the high level, but then the question is: okay, what do you do with that? And that's where I think sometimes there will be differences between technologies. We had an AI discussion earlier, and one of the points raised was really about data, how important data is in the context of AI, and the issues of unrepresentative data and bias. These are things that are perhaps specific to AI and might not be the case with a different technology. So we need to be aware that when you implement the high-level principles for a specific technology, that's where you'd have the differences. And I think that's related to the governance question, because that's where perhaps you would split things, break them up into little bits, because that's where you really need the expertise and where you might need processes happening in different places.
So, just a thought, I don't know.

Sheetal Kumar:
Could I just add something very quickly on that? It's, I think, exactly what was said about the need to integrate these high-level principles in various ways. We are now seeing that all these technologies we're using are impacting so many aspects of our lives, in a way that, I think, requires us to turn to what we have already agreed on. And what we have agreed on is the international human rights framework. That is a ready-made, already-agreed framework that we can embed throughout the supply chain of these technologies, through the standards. And there are means and tools to do that. So I think that's also very important. Sorry to plug my session tomorrow, but the OHCHR is co-hosting a session with us tomorrow on their report on technical standard-setting and human rights. It really is an opportunity, I think, as these technologies evolve, to ensure that we build them so that we have a rights-respecting world where everyone benefits from them. And in that sense, it is quite an exciting theme park then, I think. If I may hook in, Cedric, this is Thomas.

Thomas Schneider:
Something that always strikes me, when you talk about how this needs to evolve, is that while technologies evolve, and institutions will probably somehow follow, human beings themselves are fairly stable over a longer period of time in the way they function. And I often compare AI with engines, as something that has differences but also many similarities in the way it's disruptive. Engines were put in machines that either moved something from A to B much faster than men or horses or cows could, or they were put in machines that produced something, be it food, be it goods, whatever. And it's similar with AI, which is used either to generate content or to put content together in new ways, or to replace not physical human labor, but cognitive human labor. There are fewer animals you can replace, because animals seem to have less cognitive capability, so it's manpower, cognitive manpower. And if you look at the reactions of people to engines being used in different contexts, this is the point I'm trying to make: in Switzerland, near Zurich where I live, in 1833, a group of home weavers and small and medium enterprise weavers burned down a textile factory after the government had decided not to ban such factories from emerging, which is what they had demanded. They just burned it down because they were afraid of losing their jobs. And some of them actually did lose their jobs. Of course, history has since shown that more new jobs were created through industrialization than jobs were killed. So the fear of losing one's job is something we've seen before. Then ignorance is another thing. The last German Kaiser, Wilhelm II, used to say, somewhere in the early 20th century: I don't believe in the automobile; it has no future. I trust in the horse. And there are people today who say, well, this is not really going to change much, everything will stay the same. Not necessarily.
And then the other one is, again, those that banned things. In Graubünden, which is the region in Switzerland that has touristic places like the Warth and St. Moritz and others, the government banned cars from the whole territory of the region in 1900. And only 25 years later, in 1925, they allowed cars through a popular vote, because the people thought, well, actually we want to use them. And then the question is: were the people, or in this case the government, of Graubünden more environmentally friendly, or whatever? Probably not. Maybe the horse tourism industry, whatever there was, was just better organized in that region, and that made them ban cars for 25 years. So we have the same reactions to new technologies, and we will probably have the same reactions in building a network of norms, be they technical, legal, but also cultural norms, on how to use not engines, but AI, in different contexts with different levels of harmonization. In the airline business, you have much higher harmonization than in car infrastructure and rules on cars, but you do have technical, legal, and also cultural ways of organizing things. And the same will happen with AI. And then the same needs to happen with the institutional arrangements for how to take these decisions. Wolfgang Kleinwächter and others already used the framing 10 years ago that we are trying to solve the problems of the 21st century with the institutional arrangements that we made in the 19th century, which in many countries actually coincided with the industrial revolution, when you had kings and kaisers and not really democratic systems. And then, more or less in line with the industrial revolution, you had the introduction of parliaments, of the division of power between the legislative, the executive, and the court system. So there, too, technology has some influence, not just on daily lives, but also on the institutional setting.
And the notion of multi-stakeholder, I don't think it will go away, because we will have to organize ourselves differently now that technology is dematerializing. Maybe rule-making should also dematerialize from the purely physical: I live in this country now, so the rule is made in this country, for this country. Because if people move around, and if everything moves around, the physical fixing of rules just because you happen to be somewhere, or even worse, because you happen to have been born somewhere and have that citizenship, so you can only decide about the rules where you were born and not where you actually live, may not make so much sense. So we may have to develop a new way of dividing power, not along geographical political borders, but maybe through more sophisticated stakeholder-based or situation-based or voluntary group-based schemes that are more representative of the people than classical 19th-century parliaments. Thank you.

Cedric Sabbah:
Thank you so much, Thomas. It's amazing to me how sometimes, to think about the future, it's helpful to look at history. So I would like to now turn to my dear friend Alžběta Krausová to try and package this for us. We don't have a lot of time left, so I think we'll skip the Q&A. So, Alžběta, you've been attentively listening to our panelists. I know you've been involved with human-machine interfaces in the past and now, of course, AI. Can you share with us, in your view, some takeaways, some overarching thoughts, action items, areas for future research that you think we should be focusing on? Over to you.

Alžběta Krausová:
Thank you, Cedric. And thank you for organizing the panel despite the situation. My heart goes out to Israel. So let me now quickly share my observations, because we don't have much time. I made thorough notes, and I have to say that all the panelists followed up on each other so nicely. So I will try to summarize the key messages from each of them. From Thomas: the disruption is too big now, and we need to change the way that we look at things, which resonates with me very much, and I will say at the end of my speech why. Carolina said that we really need to define the scope of action right now, and also that the private sector's power is increasing, which we need to focus on. Chris focused on finding the common values and sharing the how, which I think are all very nice action points. And Gallia spoke about the importance of multidisciplinarity and involving stakeholders. Sheetal spoke about diversity, and also about the importance of a space where we come together and operationalize what we already have. I think those are nice action points that come together very well, and they respond to the questions you asked about disruption, agility, and common principles. Now, to my personal observations. I think that the convergence of technologies that we are facing now, which Thomas mentioned in the beginning, is the biggest problem. That's something we really need to focus on. And in my personal opinion, we are crossing a border, because with technologies like brain-computer interfaces, when we are able to peek inside a human brain, we are able to cross the physical border of the human body and intrude on the privacy of our minds, and, combined with AI, read the mind and even influence people. We really need to ask the main question now, which is: what world do we want to live in? Because that is crucial. We need to define where we are headed.
And it's the place for international organizations to steer the development, to steer it in a way that is thoroughly discussed. And yes, there is this cost of time, where we really need to focus, we really need to go deep, and we need to give it the time and attention and thought to see where we want to go. We need to agree on how we are going to operationalize the principles that we already have. The principles need to be applied to new situations, as was already mentioned. We do have common values, like human life and physical and mental integrity. This is something we need to consider in new ways, to see what it means in different scenarios. That's also why the bottom-up approach is very important, because we need to see, case by case, what is happening, not just theorize about what might happen. We need to see what is happening and react as quickly as possible, while balancing that with thorough discussion. And since you said we should suggest some part of a song in our final remarks, I would like to say that I feel like walking the world, which means for me that we should get to know each other better and better, and understand each other not just in a rational way, but also in an emotional way. We should not meet in just one place; we should travel, we should see each other, and we should understand each other on the human level, the complete package. Maybe this is too general an observation, but this is my position. Thank you.

Cedric Sabbah:
Thank you so much, Alžběta. This session, for me, and I wasn't thinking of this, but it kind of occurred to me as we went along: I love the idea of taking something, breaking it up, deconstructing it and then reconstructing it. And I think there was this recurring concept here: a lot of these principles and concepts, we want them, but we might have to rethink how we do certain things. That's not to say an absolute revolution is necessary, but just a recalibration so that we can adapt better for the future. So thanks so much, Alžběta. I know the time is up, and I would love this session to continue for another few hours just to hear what everybody has to say. But unfortunately, we have to stop now. I think I speak for everyone here in saying we learned a lot. I want to thank the panelists, especially Carolina, for whom I think it's quite early in the morning. Your interventions, I think, provide the foundations for some kind of follow-up, maybe at next year's IGF or something else. Last thing: if you were attentive and you think you can guess who picked which song, let us know. Thanks to everybody in the audience in Kyoto and also virtually on Zoom and YouTube, and enjoy the last day of the IGF. Thanks so much. Thank you, Cedric, and all the best. Thank you, everyone. Thank you. Thank you.

Alžběta Krausová

Speech speed: 149 words per minute
Speech length: 681 words
Speech time: 275 secs

Carolina Aguirre

Speech speed: 120 words per minute
Speech length: 625 words
Speech time: 312 secs

Cedric Sabbah

Speech speed: 163 words per minute
Speech length: 2936 words
Speech time: 1081 secs

Chris Jones

Speech speed: 209 words per minute
Speech length: 1148 words
Speech time: 330 secs

Gallia Daor

Speech speed: 182 words per minute
Speech length: 1299 words
Speech time: 428 secs

Sheetal Kumar

Speech speed: 187 words per minute
Speech length: 1568 words
Speech time: 502 secs

Thomas Schneider

Speech speed: 172 words per minute
Speech length: 1931 words
Speech time: 673 secs

Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477


Full session report

Audience

The analysis uncovers several significant points concerning gender equality and cybersecurity policies. One notable issue is the exclusion of women, girls, and individuals of other genders from discussions with the private sector and tech companies. This exclusion leads to a lack of diversity and representation in decision-making processes, potentially resulting in policies that do not adequately address the needs and concerns of all individuals.

Another concerning finding is the resistance to including gender language in the final text of policies. This pushback may arise from factors such as a resistance to change, a lack of understanding of the importance of gender-inclusive language, or intentional efforts to maintain the status quo. This resistance highlights the need for greater awareness and commitment to gender equality in policy-making processes.

On a positive note, the analysis recognizes the essential role of including a gender perspective and intersectionality in cybersecurity policies. By considering the experiences and challenges faced by different genders and intersecting identities, policies can be more comprehensive and effective in addressing cyber threats. This recognition emphasizes the importance of adopting an intersectional approach when developing cybersecurity strategies.

Furthermore, civil society and the United Nations are identified as key actors in ensuring gender-inclusive policies. Their involvement in advocating for and monitoring the implementation of gender equality measures can contribute to creating an environment that values and promotes the rights and representation of all genders.

Another noteworthy insight is the recognition that gender equality is a task that requires collective support, not just from women. It is important for everyone, regardless of gender, to actively contribute to achieving gender equality and dismantling gender-based discrimination and inequality.

Education is highlighted as a crucial tool for combating setbacks in gender equality. By promoting education that emphasizes gender equality principles and human rights, societies can foster greater understanding, empathy, and equal opportunities for all individuals.

However, limitations arise during negotiations, as member states often draw red lines that restrict progress on gender language. This observation suggests that political considerations and differing priorities among states can serve as obstacles to advancing gender equality within policy frameworks.

Additionally, the analysis emphasizes the need for a gender framework for digital transformation and cybersecurity. This framework should account for the specific challenges and vulnerabilities faced by different genders in the digital realm, ensuring that cybersecurity policies and practices are inclusive and responsive to diverse needs.

In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It highlights the need for increased diversity and inclusive decision-making processes, the importance of gender-sensitive language, the role of education in promoting gender equality, and the significance of international cooperation and civil society engagement. These insights can inform policymakers, stakeholders, and advocates working towards gender-inclusive cybersecurity policies and contribute to building a more equitable and secure digital future.

Speaker 1

The analysis underscores the critical need for cybersecurity awareness among citizens and businesses. Policymakers should actively support collaboration between different sectors to effectively address this issue. By fostering cooperation and sharing knowledge, policymakers can enhance cybersecurity practices and protect individuals and organizations from cyber threats.

Furthermore, it is crucial for policymakers to take the lead in creating awareness about cybersecurity among citizens and businesses. They can educate the public about potential risks and promote best practices for safeguarding personal and sensitive data. This proactive approach can contribute to an overall improvement in cybersecurity measures and reduce the likelihood of successful cyber attacks.

The analysis also highlights the importance of respecting human rights within the domain of cybersecurity. Policymakers should integrate human rights as a fundamental principle when formulating cybersecurity policies. It is vital to remember that real people are affected by cyber threats, and their rights and privacy should be protected. By considering human rights, policymakers can strike a balance between ensuring cybersecurity and upholding individual freedoms.

Additionally, the analysis underscores the importance of balancing innovation with securing the digital infrastructure. Many young people are involved in both positive and negative innovations in the cyber domain. Policymakers need to find a middle ground that encourages and supports innovation while ensuring the security of digital infrastructure. This balance is essential for fostering technological advancements while safeguarding against potential vulnerabilities and cyber threats.

The analysis also emphasizes the significance of including vulnerable populations in policy considerations. Often, vulnerable populations are overlooked or ignored when it comes to cybersecurity policies, resulting in their problems being disregarded. By actively including these populations in policy discussions and decision-making processes, policymakers can address their unique needs and challenges. This inclusive approach helps ensure that the concerns and vulnerabilities of all individuals are taken into account in cybersecurity strategies and initiatives.

In conclusion, the analysis highlights the importance of cybersecurity awareness, collaboration, and human rights considerations in policymaking. Policymakers play a vital role in creating awareness, fostering cooperation, and protecting human rights in the realm of cybersecurity. Moreover, finding a balance between innovation and security, as well as actively including vulnerable populations, are instrumental in developing comprehensive and effective cybersecurity policies. By considering these factors, policymakers can enhance cybersecurity practices, promote a safer online environment, and work towards achieving the relevant Sustainable Development Goals.

Veronica Ferrari

Various speakers have emphasized the importance of including a gender perspective in cybersecurity discussions. Cybersecurity is not only a technical issue: gender involves power relations and encompasses the differentiated risks and needs experienced by individuals. The recognition that cyber incidents disproportionately harm specific social groups based on factors such as gender, sexual orientation, race, and religion is growing. There is also evidence that legal cyber frameworks are being exploited to persecute women and LGBTQ individuals.

To promote a gender-inclusive approach to cybersecurity, there have been calls to integrate a gender perspective at national, regional, and international levels. The Association for Progressive Communications (APC) has developed a specific tool/framework to achieve this goal.

Concerns were specifically raised about cyber laws in the Asia-Pacific region, where shrinking civic space and challenges to civil society inputs were highlighted. It was noted that cyber-related laws can be used for censorship and criminalization, with specific issues concerning the Philippines.

Additionally, there was a discussion on the gender perspective of cybercrime legislation and the strategies employed. Jess and her organization have conducted research and advocated for gender perspectives in cyber policy discussions. Veronica Ferrari showed interest in gaining insights into the gender perspective of cybercrime legislation from Jess.

The international dynamics of gender and cybersecurity were also examined. The appearance of gender considerations in multilateral processes on cybersecurity was addressed, with David providing his views on the important factors to consider for a gender perspective at the international level.

In order to link a human-centered approach to existing agendas such as sustainable development and digital economy indicators, recommendations were made within a gender framework. This highlights the importance of aligning cybersecurity with broader goals and keeping a focus on human well-being.

Veronica Ferrari agreed on the significance of continued advocacy, research, and raising awareness about a human-centered approach while rethinking the concept of security. This emphasizes the need to push for gender inclusion in cybersecurity, generate more evidence, and promote a shift in security perceptions.

In conclusion, integrating a gender perspective into cybersecurity discussions is vital. Recognizing and addressing differentiated risks and needs, the disproportionate impact of cyber incidents on different social groups, and the misuse of legal frameworks are crucial steps towards establishing a more inclusive and equitable approach to cybersecurity.

Kemly Camacho

The analysis delves into various aspects of cybersecurity strategies and the involvement of different stakeholders in promoting gender equality. One key point highlighted is the significance of budget allocation in cybersecurity strategies. For instance, the discussion brings up Costa Rica’s cybersecurity strategy, which primarily focuses on reacting to cyber incidents rather than proactive prevention. This indicates that budget allocation plays a crucial role in defining the government’s vision and priorities, including whether gender is prioritised in the strategy.

Another significant aspect discussed is the role of civil society and training in cybersecurity. Sula Batsú, an organisation, is mentioned for convening a network of organisations across different fields to advocate for cybersecurity. They also conducted a comprehensive six-month training programme aimed at educating various sectors about the importance of cybersecurity. This evidence underscores the positive impact civil society and training can have in enhancing cybersecurity measures.

A mixed sentiment is observed regarding the private sector-led push to include more women in cybersecurity. While the intention appears to encourage gender equality, there is concern that this push may be driven by the private sector’s need to address resource gaps, rather than a genuine commitment to gender equality. This highlights the importance of ensuring that motivations for gender inclusion are rooted in equality and not solely economic interests.

The analysis also advocates for greater women’s leadership in the IT and cybersecurity sector. It highlights the stagnant percentage of women in the Latin American IT sector, which has remained unchanged for the past 15 years despite investments and efforts. The unique qualities and analytical leadership that women can bring to the sector are recognised as valuable contributions.

Furthermore, the analysis emphasises the need for safe digital spaces, drawing a parallel with the concept of safe neighbourhoods. It suggests that just as people require a safe physical environment, they also need a safe digital space. While the initial idea of integrating women in the IT sector is viewed positively, it is argued that more needs to be done to ensure genuine inclusivity.

Additionally, the analysis draws attention to the violence faced by women in the IT sector, framing it as a form of violence against women. It highlights that the challenges experienced by women in the sector are often not integrated into conversations around violence against women. The existence of extensive research on the difficult conditions faced by women in IT further supports this assertion.

Overall, the analysis sheds light on various dimensions of cybersecurity strategies, the importance of stakeholder involvement, and the need for gender equality. It provides evidence and insights into the factors that influence cybersecurity strategies, the role of civil society and training, private sector motivations, women’s representation in the sector, the need for safe digital spaces, and the recognition of violence against women in the IT field. These findings offer valuable considerations for policymakers, organisations, and individuals seeking to promote cybersecurity and gender equality.

Speaker 2

The cybercrime law in the Philippines has faced significant criticism due to its potential threat to the rights of women and LGBTQ+ individuals. One of the main concerns stems from the broad parameters and nebulous key terms surrounding the provision about cybersex, which is seen as a potentially serious threat to these marginalized groups. Additionally, the law also criminalises cyber libel, further limiting freedom of expression and raising concerns about possible misuse by authorities.

Another issue with the cybercrime law is the imposition of excessive penalties for crimes involving the use of Information and Communication Technologies (ICTs). These penalties may not be proportionate to the offences committed and can lead to unfair and disproportionate punishments.

However, there has been positive development in recent times. The problematic provision regarding cybersex in the cybercrime law has been repealed. This significant change is the result of years of advocacy by women’s rights groups that tirelessly worked towards addressing the flaws in the legislation. The repeal was enacted through a provision under new legislation addressing online sexual abuse and exploitation of children, demonstrating a shift towards a more comprehensive approach to protecting vulnerable individuals online.

The success of repealing the problematic provision highlights the importance of collaboration and building alliances to effect changes in flawed cybersecurity policies. Women’s rights groups, children’s rights groups, and LGBTQ+ groups came together to advocate for the repeal. Their concerted efforts, along with the support of a champion in the Philippine Senate who is open to dialogue with civil society, have been crucial in achieving this positive outcome.

Overall, while the cybercrime law in the Philippines still has its flaws, the recent repeal of the problematic provision about cybersex is a significant step towards addressing concerns about gender and human rights. It underscores the power of advocacy and collaboration in bringing about meaningful changes in policy. The journey, however, does not end here, and continued efforts are needed to ensure that cybersecurity policies align with international standards and protect the rights of all individuals in the digital realm.

David Fairchild

The analysis of David’s remarks sheds light on several important points concerning gender inclusion in cybersecurity and international policy. David underscored the significance of multilateral processes in advancing this cause. He noted that Canada has consistently supported gender issues as a crucial component of their foreign aid policy, reflecting the country’s commitment to promoting gender equality on the global stage. However, David also expressed concerns about the potential negative consequences of overemphasizing gender. He cautioned against an excessive focus on gender, highlighting the strategic disadvantages that can arise from such an approach.

In addition to advocating for multilateral processes, David highlighted the importance of education and understanding in addressing gender issues within technical fields. Specifically, he referenced the International Telecommunication Union, emphasizing the need to ensure that gender equality and understanding are prioritized in highly technical areas, where human rights may not always receive sufficient attention. David further emphasized that gender equality should not be viewed solely as a women’s issue, but rather as an issue that requires the support and involvement of everyone.

The analysis also revealed David’s observations on the ongoing debates and pushbacks surrounding gender language, even within progressive platforms like the UN. He cited an unnamed state’s call to end the integration of gender-related language in UN documents, demonstrating the challenges faced in promoting gender inclusion. Moreover, David noted that some countries or blocs may use gender language as a bargaining chip during negotiations, further complicating the progress towards gender equality.

In conclusion, David’s remarks emphasized the crucial role of multilateral processes in promoting gender inclusion in cybersecurity and international policy. While commending Canada’s ongoing support for gender issues, he warned against the negative effects of overemphasizing gender. David stressed the need for education and understanding regarding gender issues in technical fields, highlighting the International Telecommunication Union as an example. Furthermore, he highlighted the ongoing debates and pushbacks surrounding gender language, underscoring the challenges faced in advancing gender equality. The analysis revealed both positive and negative sentiments expressed by David, reflecting the complexity and ongoing nature of these important issues.

Session transcript

Veronica Ferrari:
We are a small group, but we are very happy to be here. I’m the advocacy coordinator at the Association for Progressive Communications. I invite those who are in the room and want to come a bit closer, that’s fine. We are a small group. So the idea is to have a conversation and to hear from you also. So quickly, for those who don’t know me, I work on social and environmental justice and their intersections with digital technologies. And in today’s session, we are going to discuss, as you may know, gender perspectives in cybersecurity, specifically cybersecurity policy. So we all know that traditionally cybersecurity debates were mainly centered on national security and the security of systems. But in recent years, we are also seeing an increasing focus on the need for a human rights-based approach to cybersecurity, which is an approach that places humans at the center, since they are the ones impacted by cyber threats and cyber operations. And additionally, we see more and more recognition in international, regional, and national spaces that different social groups in different parts of the world experience the internet, and the threats on it, in different ways. Research by the Association for Progressive Communications and others has shown that cyber incidents disproportionately impact and harm individuals and groups in society on the basis of their gender, but also their sexual orientation, their gender identity or expression, and also because of race and religion.
So, we have been documenting and producing research that shows that around the world, legal cyber frameworks are being used to silence and persecute women and LGBTQ people for their activism, their gender expression, or simply for expressing dissent. What do we mean by a gender approach to cybersecurity? How can we integrate such a perspective in debates at the national, regional, and international levels? So, we have a lot of questions about this, and it would also be great to discuss what issues this agenda should focus on in the future. So, for this, we have great speakers here who will be sharing examples of how cybersecurity directly affects the lives of women and diversities in different regions of the world. They will also tell us what the status of the integration of gender in cybersecurity is, as well as the biggest challenges that we need to face. So, quick intros for our great panel. Our first panellist is Kemly Camacho, co-founder and current general coordinator of Sula Batsú. Next, we have Grace Githaiga, CEO and convener of the Kenya ICT Action Network, KICTANet, which is a multi-stakeholder platform for people and institutions interested and involved in ICT policy and regulation. Also joining is Jessamine Pacis from the Foundation for Media Alternatives, where she works on issues related to privacy, data and cybersecurity. And finally, we also welcome David Fairchild, First Secretary at the Permanent Mission of Canada. David focuses on digital policy and cybersecurity and represents Canada, for example, at the UN Open-Ended Working Group on ICTs. So, again, thank you all for being here. So we plan to have a round of interventions from our speakers, and then the idea is to open the floor up to questions.
So, before we dive into the discussion, I quickly wanted to provide a background of APC’s thinking on gender and cybersecurity, and say a bit more about the specific tool we have developed, these brochures here. We have copies in English and Spanish, and you can use the QR code here to download it. So, firstly, for us, it’s important to note that a gender approach to cybersecurity is not only a women’s issue, that gender goes beyond that, and gender is about power relations. The idea also is that cybersecurity is not only a technical issue, and that technological and also policy solutions can actually contribute to, or be used to mitigate, discrimination and inequalities in societies. So, for APC, a gender approach to cybersecurity is about understanding and addressing differentiated risks and also needs faced by complex subjects. A gender approach to cybersecurity should be explicitly intersectional, so it should take into account gender together with the other intersections and factors that compose our identities, such as race, ethnicity, religion, and class, so that cybersecurity is actually responsive to the diversity of security priorities, perceptions, and practices of different groups and people. Our approach also recognises that we are all active subjects that have agency in the process of creating a more secure environment online for everyone, and it questions and works to overcome one of the main challenges regarding the security of the internet.
So, all in all, this perspective means that in every step of the design, implementation, and evaluation of cybersecurity measures and policies, the goal should be to positively impact the greatest number of people in all of their diversity, and to make sure that neither the security of systems nor human rights are affected or weakened. So, about the framework that I mentioned before, just quickly: basically, from our research and an initial mapping that we conducted at APC, we found it difficult to find references to gender in cyber policies, and we found that there were not enough technical recommendations or guidance on how to incorporate such a perspective into cyber policy. So, because of that, we believed it was key to offer a reflection on why it is important to include a gender approach to cyber, but also guidance on how to do it. So, in collaboration with cybersecurity and gender specialists, activists, and also policymakers, we developed this framework, and we think that this framework could be a useful tool for civil society when working at the national level, engaging in regional discussions, and also at the international level, and we also think that this could feed the discussions happening, for example, at the UN on cybersecurity. So, basically, we want to help and support different audiences and groups in different ways.
So, this framework is made up of an overview, a document that combines the norms, standards, and practices that address the role of gender in cybersecurity, from Human Rights Council resolutions to ITU guidelines and reports of UN processes. We have another document that maps the existing research addressing gender and cybersecurity, which is still scarce but has been growing in recent years, and also an assessment tool that provides practical recommendations for different audiences and in different ways, also thinking about the international organizations and the regional organizations that are the ones that provide advice for the development of cybersecurity strategies. So basically this framework has been designed as a starting point. We acknowledge that the recommendations are general and we need to adapt them to local and national contexts. This is why we have been organizing regional conversations with civil society and with policymakers in regional IGFs to socialize and also enrich this framework, and we are discussing it now with the IGF community. So I will stop here. So we would like to hear from our speakers. Kemly, if that’s okay, I will start with you. So Sula Batsú has a lot of experience engaging in cyber policy in Costa Rica but also in Central America. So I wanted to ask you what are, in your view, the main issues that a gender perspective on cybersecurity should consider in the region, and also what do you think is the status of the integration of a gender perspective in cybersecurity policy in Costa Rica and more broadly. So yeah, I’d love to hear your thoughts on this. I can pass you this mic. Okay, you have it there. Thank you, Kemly.

Kemly Camacho:
Thank you. Thank you, Veronica. Thank you for the invitation. Thank you, everybody, for being here. This is, I think, a really key discussion. I decided to go to the very practical aspects, based on the experience that we have had since 2018 working on integrating gender in policies in general and in the cybersecurity strategy in Costa Rica. I’m going to try to reflect a little bit and get to some of the good practices and the lessons learned also. Just very fast: at Sula Batsú, we participate in Costa Rica, where we have a policy of gender, science and technology which is the big framework to work on these issues. And we participated very actively in the building of this policy, and then later we were designated to elaborate the monitoring and evaluation framework for the policy. And now, this year, we are designated to begin, when I return, to develop the action plan for the policy of science, technology and gender in Costa Rica, yes? And also, we are part of the national committee, as representative of the civil society organisations, the National Committee for the Cybersecurity Strategy. And one thing, first, here, my first point is, I don’t know about all your countries, but this is mandatory in Costa Rica: you have to have a committee, a multi-stakeholder committee, to develop and to follow up the policies and the strategies. Then we are part of this committee. The first thing that I have to say is budget. Budget, where the budget is allocated. This is, to be honest, the possibility to do or not to do things, yes? And to define the vision of a government, yes? About what you are going to prioritize, and whether gender is prioritized in the strategy, it is the budget. Then this is one of the lessons learned, one of the things that we wanted to share. As Veronica said before, we have two moments in Costa Rica. Costa Rica was hacked as a country in 2020, exactly after the pandemic.
Yes, we were hacked totally as a country, yes. Health data, banking data, everything was hacked as a country. Then there is a before and an after the hacking, yes. And also, at the same time as the hacking, we got an authoritarian government, yes. And the other one was more open. So we were hacked and have an authoritarian government, OK? Then I wanted to say first that the cybersecurity strategy was, at the beginning, totally oriented to attend to, to take care of, the attacks. That is the cybersecurity policy. I imagine in many of your countries it is the same, yes. Nothing more than that. And all the budget was related to reacting to the cyberattacks. Even with that, we were hacked as a country totally, OK? But when we were hacked, something very important happened: because the country was not prepared, they asked the private sector to be in charge of the cybersecurity of the country, OK? And this is something that continues happening, OK? And also, they asked some governments to support the country in the cybersecurity part. Then, in this context, we have tried to integrate the gender perspective in the cybersecurity strategy, yes. Then what did we do to try to integrate gender in the cybersecurity strategy? One thing that we did was to convene civil society organisations as a network, and we, as representative of the civil society organisations, convened a network of organisations that were not interested in cybersecurity at all: organisations working with kids and young people, organisations working with sex workers, organisations working in environment, organisations working on HIV, LGBT organisations, a really big network of organisations to do the advocacy based on this big movement. If not, it’s impossible for us to integrate a gender perspective in the strategies.
One first thing I don’t see here, we participated on that, then it’s something for ourselves also: something that we had to do with these organisations was a training programme about what cybersecurity is, yes, and why it’s important for organisations working on indigenous aspects, yes. The education part about what cybersecurity is, using a popular education approach, I think is something that we have to do. For these organisations, even more than cybersecurity, they are worried about the management of personal data, yes, and it’s connected, but it’s not the same. Then, I think this is something that we have learned: we had to dedicate almost six months of training programmes for the people to really understand not only what cybersecurity is, but what the connection is between cybersecurity and sex workers, for instance, yes. Then this process is, for me, crucial, crucial, because it’s the only way to really advocate; we believe a lot in advocacy based on social movements. Then this is one point I wanted to make. We have discussed, I don’t remember in which of the panels, the issue of the consultation, because we were consulted by the strategy builders, yes. But in this consultation, we participated a lot, we dedicated a lot of time, we made the recommendations, we commented on everything, and when the first cybersecurity strategy came out, none of the civil society comments were integrated. Then this consultation process is also something that we have to take very much into account. I’m going to finish very fast, I have other things, but I wanted to say that after the hacking and the authoritarian government came, we continued with the last cybersecurity strategy. I don’t know if that happened in your countries, but when the government changed, they trashed it, okay? All these processes, they trashed them, and they began another process, okay?
They begin another process, and this is something we also have to take into account when working on these issues, because we have to start the whole process of developing the strategy all over again. Something else I wanted to say: in the second strategy, led by the private sector, by the big companies that have their headquarters in Costa Rica, my country, they are pushing hard for more women to study cybersecurity, and this is one of the most important strands inside the cybersecurity strategy: women studying cybersecurity as the gender focus of the strategy. Of course this is wonderful, more women in IT and all of that, even if it comes from the private sector agenda, to cover the deficit of human resources they face in responding to all the digital development. But we have to be very careful here, because it is not necessarily an aspect of a gender approach to the cybersecurity strategy. Just two more words. In the policy on gender, science and technology, what we could do was integrate a big area, a big strategy, related to violence against women; and because this is the umbrella, maybe we can use it to develop the strategy. We could also integrate gender data monitoring, not only of women but of gender: data monitoring of violence against people living with HIV, sex workers, and so on. We know violence against gender diversity is not the only thing, but those are the issues that, at this moment, we could integrate based on our practice. I will leave it there. Okay. Thank you, Kemly. Yeah, so many great things that Kemly shared

Veronica Ferrari:
from her experience working at the national and regional level: the need for awareness at the very beginning; the need to form coalitions and to be linked with organisations working on other agendas, not necessarily on cybersecurity and gender but on human rights, development, and children's rights more broadly; and the need to think about a gender perspective in cyber beyond the idea of diversity and the inclusion of more women in ICTs, which I think is really important. I'd like now to turn to Grace. Grace, you also have extensive experience working on cybersecurity policy at the national, regional, and international levels, including direct work on cyber capacity building for groups that experience marginalisation, such as women, but also persons with disabilities and people living in rural areas. So I wanted to ask you about the intersectional challenges that policymakers should consider when working on cybersecurity policy, and how policymakers can effectively address these challenges, which are about gender but also about broader inequality issues. I would love to hear your thoughts on that.

Grace:
Okay, thanks, Veronica. Before I respond to your question, I just wanted to say that at KICTANet we work on cybersecurity and cyber hygiene, in line with our mandate to push, or agitate, for the inclusion of communities in ICTs and in whatever we do. For example, we have dedicated an entire programme to working with women in all their diversity. This has included training women in digital security and cyber hygiene practices, and encouraging them to form communities of practice so that they are able to protect each other, especially when they are attacked online, and to push for their issues so that these get policy attention. In terms of working with other groups, as you raised: we work with persons with disabilities, with farmers, with home caregivers, and with youth in the informal settlements. That is our work at the national level, and we also sit on the Kenya CERT as the civil society representatives. In terms of regional work, we run the TATUA Digital Resilience Centre; TATUA is Swahili, meaning "it will solve". It supports social justice organisations, organisations working on very sensitive issues, to enhance their digital resilience. And internationally, we participate in the Open-Ended Working Group, making sure that we bring the perspectives of ordinary people into those conversations.
Now, to the question you asked about what policymakers should consider at that intersection: the first thing I want to say is that cybersecurity, unlike other policy issues, is complex; it is a multifaceted field that intersects with different stakeholders and different domains. Because of that, policymakers need to consider a range of intersectional challenges, and I am going to highlight three. The first, drawn from my experience of working with community members who are not particularly knowledgeable about cyber matters, is cybersecurity awareness. There is a need to understand what informs the perpetrators, but also to ask whether ordinary people really understand what cybersecurity is all about. So awareness is a very critical part of the policy: apart from coming up with policies, policymakers need to be at the forefront of supporting awareness creation among citizens and among businesses, and of supporting collaboration between the different sectors.
That is the first part. The second issue I want to talk about is human rights. In civil society we work with ordinary people, people who often have no public space in which to speak about what institutions are doing, so meaningful participation matters. Human rights have to be a vital consideration, and when I talk about vulnerable populations, it is because there is that element of thinking up here and forgetting that there are people affected down there, of assuming that the issues of the people down there do not matter. And finally, when it comes to cyber attacks, it is important to remember that we have a lot of young people in the system who are innovating, both positively and negatively, because a lot of cyber attacks actually come from young people who are unemployed and constantly thinking of how to make money, of how to break into banks and companies. The tendency is to respond to that with policy that sometimes curtails innovation. Policymakers therefore need to keep up with rapidly evolving technologies and an ever-changing threat landscape: today's threats get identified, and once people know they have been identified, they think about how to get around what has been done and come up with new threats. So policymakers need to stay ahead, and there is a need to balance innovation with securing the digital infrastructure. Thank

Veronica Ferrari:
you. Thank you, Grace. You mentioned a lot of critical points, but I was thinking especially about the need to actually involve the communities and groups that experience these differentiated impacts and that have specific needs and perceptions around cybersecurity, not only when policies are drafted but also when they are implemented and evaluated. In the framework we put together there are some recommendations in that regard, so I think it is a key point to keep in mind. Thanks so much for sharing that, Grace.

Veronica Ferrari:
I would like to turn to Jess now, if that's okay. As I mentioned, we have been organizing regional conversations around this framework, including a good session during the Asia-Pacific IGF. Participants there highlighted challenges in the region related to shrinking civic space and to civil society inputs not being taken into account, a challenge that, as we heard from Kemly, clearly appears in other regions of the world as well. Another thing that came up in that conversation is cyber-related laws that are ultimately used to censor and even criminalize. You and your organization have done research and advocacy around those issues in the Philippine context, so I wanted to ask you to briefly share what the problems were, from a gender perspective, with the cybercrime legislation in the Philippines. And it could be useful for all of us if you can also share the strategies you put in place to engage in cyber policy discussions and bring gender and feminist perspectives. Thanks so much, Jess.

Jess:
Thank you, Veronica, and thank you, firstly, for inviting us to share our experiences from the Philippines. We also have a national cybersecurity plan, which is actually in the process of being updated this year; I hope we will have time later so I can talk about that as well. But as to the cybercrime law, which is another piece of legislation that is very crucial and has a big impact on gender: let me start by saying that the cybercrime law of the Philippines has a lot of problems from a human rights perspective
in general. We have the criminalization of cyber libel, and we have a very generic, wide-reaching blanket provision that imposes excessive penalties on crimes committed with the use of ICTs. But one of the most problematic provisions, especially in relation to gender, is that the law introduced a new crime called cybersex, very broadly defined as the willful engagement, maintenance, control, or operation, directly or indirectly, of any lascivious exhibition of sexual organs or sexual activity, with the aid of a computer system, for favor or consideration. It is a very broad definition, and the law did not even define some of the critical terms: what counts as lascivious exhibition, what counts as sexual organs, what counts as sexual activity. That makes the provision prone to arbitrary interpretation by whoever is made to interpret it, and it brings us to a situation where even consensual acts done online, artistic works, or legitimate expressions of women and LGBTQ persons, for example, could fall under this criminalized provision. Considering that Philippine society and culture are still highly patriarchal and predominantly Catholic, with a lot of conservative values, a policy made subject to those kinds of moral standards really disproportionately endangers women and LGBTQ persons and their right to freedom of expression. The good news is that this provision was very recently repealed, I think early last year. It was not through an amendment of the entire cybercrime law; it was through a repealing provision in a new law on online sexual abuse and exploitation of children. So it took quite an unconventional route, and it was not ideal,
but I think we also need to recognize that this was a product of years of advocacy by women's rights groups and LGBTQ advocacy groups in the Philippines. As to the second part of your question, on the strategies that led to this small victory, as I consider it: like Kemly mentioned earlier, it was really working with the networks. It took a lot of collaboration and coordination across different advocacy groups: women's rights groups, children's rights groups (because, as I said, the provision was repealed under a law on online sexual abuse and exploitation of children), and LGBTQ groups. Because the law is problematic on so many different points, it was clear to us early on that we had to attack it from different points of entry. It was also fortunate for us to have a champion in the Philippine Senate who is a staunch advocate of women's rights and remains open to speaking with civil society on various issues, including cyber policy. That was, I think, a key point in pushing for that kind of legislative change. No worries. Thanks so much, Jess. I think that was a key point:

Veronica Ferrari:
the idea of forming coalitions, and also of identifying a champion within the government, whether or not they work on cybersecurity specifically. I would like now to move to some of the international discussions, and to David, because I would like you to share a bit of what is happening at the international level and how gender considerations appear in multilateral processes on cybersecurity, for example the UN Open-Ended Working Group on ICTs, and, if you can, what in your view are the crucial factors that a gender perspective on international cybersecurity should consider moving forward. Over to you, David. Thanks.

David Fairchild:
Great. Hi, everybody. It's day three, bottom of the seventh, nine o'clock in the morning, and I'm going to talk a little bit about the importance of the multilateral processes. I'll skip most of my notes and get to the point. Canada has long supported gender issues at the international level. It's a core component of our foreign policy and of our international aid policy, so it goes without saying that we support this issue entirely, and I won't spend a lot of time on that; despite the fact that I've probably got two pages of notes, none of it is really that relevant. What is relevant is painting a bit of a canvas of what is going on, because people only see the final product, right? What you don't see is everything in between, in the interim period, behind closed doors, where countries like Canada and like-minded states are fighting for the inclusion of specific language that I think we would all agree with, while a cast of countries, which I won't bother naming (I'm sure you can figure out who they are), push an alternative narrative for their own purposes. This is a constant fight, and it is not going away. I cover lots of UN agencies, as I sit in Geneva, including the Human Rights Council, where this is often a front-and-centre element of many negotiations.
This is more of a clarion call to repeat that we have not won the war; we are winning battles, but the war is not over, and it is of critical importance that we continue to frame our activities in a rights-respecting way. The OEWG itself has a norm, Norm E, which says that countries must respect basic international frameworks, including the UDHR, in their uses of cyberspace. Some countries, as we know, may say that they respect the principles and the framework, but their implementation of the framework falls short. We are seeing backsliding on SOGI language, and we are seeing efforts by some countries to reframe how we talk about rights, away from individual rights towards people-centric rights, which we know is a crafty way of reducing the role of the individual and playing up the role of the state. These are unfortunately traps that some people fall into, because this language is brought to different forums in different ways, and some of the people in those meetings are not as imbued with human rights expertise as in other places. I cover the ITU, which is also a fascinating place if you want to spend a few hours. One would think standards are not political, but we do sometimes get wrapped around the axle fighting over gender language; I've been up until midnight, two in the morning, fighting about the inclusion of gender language in a technical standard negotiation. It's not pleasant, but it's necessary. So I don't really want to spend time on my notes, because that's not what's relevant. What's relevant is to reinforce to this community that Canada is in the room, we are fighting, but we need support. We need to continue to raise our voices to those who disagree, and we need to be sophisticated.
There is also, of course, a trend of overemphasizing gender, which in fact has a negative strategic effect. So it's about being smart, being nuanced, and being appropriate about where we want to push. But we just need to keep pushing. This is not going to go away, and frankly, as we all see, cyber, digital, and tech are becoming much more front and centre in international geopolitics and geostrategic competition. There is a new set of fora that are not as well imbued with human rights understandings as fora like the Human Rights Council, where there is a much more mature conversation and people who understand the issue. So it's imperative that we support the technical community, and that we support civil society in the member states, to the extent that we can, in understanding why we need to make sure there is no backsliding and why we must reinforce the existing international human rights frameworks. I think that's more important than whatever somebody from Ottawa sent me yesterday. I will stop there.

Veronica Ferrari:
Thanks for that, David. We were just talking about the need to identify champions, and Canada has been pushing for the inclusion of this type of language in negotiations and has also been a key ally for civil society participation in some of these international processes. It is so important to have the groups affected in a differentiated way, as Grace and others were saying, present in these discussions, along with the organizations that try to bring those perspectives there. So thanks for that. Now I have a couple of questions; I don't know if we can technically show them on screen instead of my face at that size. They are for the speakers, but anyone from the physical or online audience is welcome to jump in too. I wanted to quickly hear your thoughts on the main challenges (Jess, you mentioned some of them, and Kemly too) that you have faced, or think you would face, when advocating for gender and intersectional perspectives in cyber policy; also any thoughts on how a tool like this framework could support different stakeholders in integrating a gender perspective into cybersecurity policy and norms; and finally, what resources, support, or guidance you think you need to champion gender in cybersecurity policy in your work. I just wanted to open the floor to see if there are any thoughts from the audience, but I would also like to hear from the speakers. I see a hand there. Do you want to jump in? Yeah. Can you pass the mic to the colleague? Thank you.

Audience:
Thank you so much. My name is Ahmed Karim, from the UN Women Regional Office for Asia and the Pacific, and I have three quick questions. The first, which I also noticed here during the IGF: the conversation with the private sector and tech companies is very gender-blind. Most of the time it generalizes all users into one basket or takes a global perspective, or the focus is just on minors; women and girls are excluded, and other genders are also not part of the design. I wonder if you have any strategies or specific ways to change that conversation, to make it gender-sensitive and to include gender in the design of the platforms themselves. The second question relates to the inclusion of the cybersecurity agenda in national action plans: have any of you had that experience within a national context, and what elements of the cybersecurity agenda could be included there? And the last question is more for David, on those nuances in between, before the text is finalized: what are the main issues that really get pushback against the inclusion of gender language in the final text, where do you think this is coming from, and how can we, from civil society and the UN, help in eliminating some of those concerns about inclusion? Thanks. Great questions. Thanks so much for that. I see Angela's hand. Do you want to jump in? Then we will try to distribute the questions. I'm not sure if you can hear me. Please go ahead. I wanted to attempt question three, on what we can do to bring the gender agenda into the cybersecurity space, and also to respond to your questions and concerns, because I have the same concerns. This is something I have spoken about with Grace: we need to have research on this ourselves.
It is very hard to go deep into that discussion here, but we both know that this disproportionately affects women, minorities, and sexual minorities. Even just thinking about what kind of data the bodies that receive complaints hold: research on cybersecurity built on that data would give us helpful insights into how to deal with these issues. I think it is very important to have a discussion about the impact, even in monetary and mental health terms, so that it can enrich the policy decisions that are made. That is my contribution to that question.

Veronica Ferrari:
Thanks so much, Angela, for the contribution and for responding to the colleague here. Do any of you want to address some of the questions? I think we have time for one or two. Kemly? Kemly, go ahead, please.

Kemly Camacho:
I wanted to address the first question; it is my passion, to be honest, because I have been working in this field for a long time, and I have to say that we have switched the focus of our work a lot, which I think is really important to share. At the beginning we started with the idea of integrating more women into the IT sector: doing capacity training for them to join the sector and to have job opportunities there, because it really is a sector of opportunities. I think this is good, but it is not enough. For UNESCO, I did a mapping of all the initiatives in Latin America to attract women to the IT sector and to integrate women into it, and there are a lot; but at least in our region the percentages, 20% and 80%, have not changed in 15 years. They have not changed at all, even with all this effort and all this investment. I think this is because the IT sector is very expulsive of the diverse, and the conditions for women studying and working in the IT sector are hard. So at one point we decided that, instead of continuing to do what others are doing (and we do think it is part of women's economic rights), we would work much more on creating a women's leadership for the IT sector: an analytical women's leadership that understands its own conditions. And this is connected with cybersecurity: what it means to be part of this society as women, and how we, women in the IT sector, can contribute to the struggle of women in general.
And this is where I connect with the third question: this solidarity, solidaridad as we call it in Spanish, where we have to connect the process of building this women's leadership with reflection on cybersecurity, from this really analytical and collective action of women in IT supporting women. For us, this is the strategy. We think it is crucial that women work in and study IT. But the problem is that we have a lot of evidence, because we have done a lot of participatory research with them, about the conditions in which they work and study, and those conditions also have to change. For us, this is part of violence against women, a violence that we have not yet integrated into the discussion around violence against women. So this is my proposal: a big leadership of women in IT supporting the women's agenda, including cybersecurity. And just to finish: we understand cybersecurity as the right of people to have a safe space in the digital world, just as they need a safe space in their neighbourhood. This is the way we are focusing it. Thank you for the question.

Veronica Ferrari:
Jess, do you want to quickly address some of the questions? Then I will try to go to David, so we don't forget the question about the pushback in international negotiations, and there is one more question. Go ahead, briefly.

Jess:
Yes, very briefly, because it is also related to what Kemly said. I was thinking about this based on the questions that were posed, but it might also address your concern. I really think that we have to go back and re-evaluate our concepts of security, because, like you said, the way we frame security issues now is still very highly masculinized, and unless this kind of thinking is addressed, everything we do, even if we push for policy changes, even if we encourage women to go into the tech and ICT sector or the cybersecurity sector, will just be a stopgap measure. A new policy will come in and regress to the same traditional frameworks we are used to. This is also what I like most about the APC framework: it highlights the need to really go back to the ways we think about security, and through that we will be able to change policy, change the frameworks, and change the institutions and structures that are already very deeply ingrained in the security sector, as well as the attitudes of the actors: people in government, and even people from business and the private sector. I think that is really where we need to start.

Veronica Ferrari:
Thanks so much for that, Jess. David, do you want to jump in on the question about the discussions?

David Fairchild:
Oh, yeah. All right. A couple of things. This may sound a little bombastic, but it's not just women who are front and centre here: gender is not a gender-specific term. Whether you're a man, a woman, or however you want to describe yourself, it's an effort that everybody has to get behind. So I'd just like to slightly correct the record: even though I'm a man, that doesn't mean I can't be highly supportive of the gender movement. That being said, to the backsliding and how we can fight it: I would focus on the upstream. Take, for instance, the International Telecommunication Union. It's an old organization, in fact the oldest organization in the UN, and it's very technical, so human rights is not something that comes front of mind for many of these highly technical engineers and so on. So it's really about education. Their demographic and the pools of interactions and stakeholders they deal with are not the same as in the human rights world. So there is a reaching across the hallway to be done, which is partly our job, but also, I think, civil society's. It is, as we say in French, les deux solitudes, the two solitudes: people have their own demographics and their own stakeholders. It's getting better, but it's not great. A lot of it is simply because member states have certain red lines. That's normal; we have red lines when we're in negotiations too, framed around our values and our policies in the same way. I don't have to agree with them.
And so the fight is about trying to find consensus. The UN works on consensus which, just to remind you, is not unanimity, but consensus tends to mean getting everybody to agree. So sometimes some countries or blocs will hold out on something of substance because they don't like the gender language. Sometimes it's to change it; sometimes it's to have it extracted; and sometimes, because they know it's important to us, it's simply used as a weapon to extract concessions in other ways. So that's, as they say, pulling back the kimono a bit to reveal a little of what's going on in the background. I realize we have two minutes left and I see a hand up, so I'll finish with this. I won't name the state, but the Human Rights Council's 54th session is currently ongoing, and in one of the item 8 debates a few days ago a state, which I won't name, got up and, in a statement, called for an end to the integration of SOGI language in UN documents, on the basis that it is not recognized as a legal form of discrimination under international law. This state is perhaps not the one you might think would make such a statement; I'm happy to tell you offline. But it gives you an example: it's happening even in the Human Rights Council, and it's happening everywhere. In the Human Rights Council we have people who understand these debates and can defend our values and the international human rights framework, but that doesn't necessarily mean the same expertise exists at an IEEE meeting, or at the IETF, or at the ITU. That's where civil society, and the stakeholders who are more educated on these issues, need to work with and help those who aren't.

Audience:
Thanks, David, for that. I’m aware of the time, but I want to give the opportunity to jump in. And there is another question in the room? Okay, please, and then I can try to wrap up. I can also talk about this after the session, but my name is Farzaneh Badii from Digital Medusa. We are doing research for USAID, which is looking at human-centered approaches to digital transformation, and one of its strategies is to incorporate cybersecurity into digital transformation. I was wondering if you know of any gender framework that could help development organizations working on digital transformation to consider gender as a factor when they put cybersecurity in place, and to do so from the beginning instead of after the technology is in place.

Veronica Ferrari:
Thanks for that question. I know we have to finish the session, but I encourage you all to continue the conversation after it ends. We can touch base, in fact, because we have some recommendations in the framework about how to link this agenda to others, for example the agenda for sustainable development, and also to digital economy indicators. Connecting those with broader agendas could be useful, for example, in a digital transformation strategy discussion, but we can continue the conversation after the session. Grace, I want to give you the opportunity to say something before we close, if you want to. No? Okay. Thank you for being mindful of the time, and thank you all for the discussions. There were a lot of great points: the need to keep pushing for this, to produce more research and more evidence, and the importance of continuing to create awareness and rethinking the concept of security, as Jess was saying. So thanks so much. Please reach out to APC if you want to stay in touch, and enjoy the rest of the IGF. Bye. Thank you so much. Thank you all for coming, and have a good day. Thank you for being here, and we’ll see you next time.

Audience

Speech speed

169 words per minute

Speech length

674 words

Speech time

240 secs

David Fairchild

Speech speed

208 words per minute

Speech length

1671 words

Speech time

482 secs

Kemly Camacho

Speech speed

127 words per minute

Speech length

2032 words

Speech time

957 secs

Speaker 1

Speech speed

174 words per minute

Speech length

1041 words

Speech time

359 secs

Speaker 2

Speech speed

143 words per minute

Speech length

892 words

Speech time

375 secs

Veronica Ferrari

Speech speed

190 words per minute

Speech length

3218 words

Speech time

1015 secs

Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64


Full session report

Ananya Singh

The analysis features speakers discussing the exploitation of personal data without consent and drawing parallels to colonialism. They argue that personal data is often used for profit without knowledge or permission, highlighting the need for more transparency and accountability in handling personal data. The speakers believe that the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent.

One of the main concerns raised is the concept of data colonialism, which is compared to historical colonial practices. The speakers argue that data colonialism aims to capture and control human life through the appropriation of data for profit. They urge individuals to question data-intensive corporate ideologies that incentivise the collection of personal data. They argue that the collection and analysis of personal data can perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination.

In response, the speakers suggest that individuals should take steps to minimise the amount of personal data they share online or with technology platforms. They emphasise the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data. They also propose the idea of digital minimalism, which involves limiting one’s social media presence as a way to minimise data.

The analysis also highlights the need for digital literacy programmes to aid in decolonising the internet. Such programmes can help individuals navigate the internet more effectively and critically, enabling them to understand the implications of sharing personal data and make informed choices.

Overall, the speakers advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. They believe that data colonialism provides an opportunity to create systems rooted in ethics. However, they caution against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights.

Some noteworthy observations from the analysis include the focus on the negative sentiment towards the unregulated collection and use of personal data. The speakers highlight the potential harm caused by data exploitation and advocate for stronger regulation and protection of personal data. They also highlight the need for a more informed and critical approach to online platforms and the terms of service they offer.

In conclusion, the analysis underscores the importance of addressing the exploitation of personal data without consent and the potential harms of data colonialism. It calls for more transparency, accountability, and individual action in minimising data sharing. It also emphasises the need for critical digital literacy programmes and promotes the concept of ownership by design to create ethical systems.

Audience

The discussions revolved around several interconnected issues, including legal diversity, accessibility, privacy, and economic patterns. These rights and needs were seen as not always respected globally, owing to economic interests and the perpetuation of stereotypes. This highlights the need for increased awareness and efforts to address these issues on a global scale.

One of the arguments put forth was that privacy should be considered as a global right or human right. This suggests the importance of acknowledging privacy as a fundamental aspect of individual rights, regardless of geographical location or cultural context.

Another point of discussion was the need for a taxonomy that identifies specific local needs and how they relate to cultural, historical, or political characteristics. The argument advocates for better understanding and consideration of these factors to address the unique requirements of different communities and regions. This approach aims to reduce inequalities and promote inclusive development.

The distinction between local and global needs was also highlighted as crucial for effective population planning and reducing migration to the Global North. By focusing on empowering individuals to thrive in their country of origin, the discussion emphasized the importance of creating conditions that allow people to stay and contribute to their local communities.

The importance of reimagining digital literacy and skills training was emphasized as essential for empowering marginalized communities. This involves providing equitable access to digital tools and promoting inclusivity in digital participation. Bridging the digital divide was seen as necessary to ensure that everyone has the necessary tools and skills to fully participate in the digital world.

The discussions also delved into the decolonization of the Internet and the digital landscape. It was recognized that this is an ongoing journey that requires continuous reflections, open dialogue, and actionable steps. The complexities surrounding decolonization were explored in relation to factors such as economic gains and the question of who benefits from the current digital landscape.

Lastly, the need to strive for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location, was highlighted. This vision of a future in which the internet becomes a force of equality, justice, and liberation motivates efforts towards digital inclusivity and empowerment.

In conclusion, the discussions explored various critical aspects related to legal diversities, accessibility, privacy, and economic patterns. They underscored the importance of addressing these issues globally, recognizing privacy as a universal right, understanding local needs, bridging the digital divide, and advocating for a decolonized digital space. The overall emphasis was on promoting inclusivity, reducing inequalities, and fostering empowerment in the digital age.

Jonas Valente

The analysis highlights several important points from the speakers’ discussions. Firstly, it is noted that the development and deployment of artificial intelligence (AI) heavily rely on human labor, particularly from countries in the global South. Activities such as data collection, curation, annotation, and validation are essential for AI work. This dependence on human labor underscores the important role that workers from the global South play in the advancement of AI technologies.

However, the analysis also reveals that working conditions for AI labor are generally precarious. Workers in this industry often face low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective power. The strenuous work schedules in the sector have also been found to contribute to sleep issues and mental health problems among these workers. These challenges highlight the need for improved working conditions and better protections for AI labor.

One positive development in this regard is the Fair Work Project, which aims to address labor conditions in the AI industry. The project evaluates digital labor platforms based on a set of fair work principles. Currently operational in almost 40 countries, the Fair Work Project rates platforms based on their adherence to these principles, including factors such as pay conditions, contract management, and representation. This initiative seeks to improve conditions and drive positive change within the AI labor market.

Another concern raised in the analysis is the exploitation of cheap labor within the development of AI. Companies benefit from the use of digital labor platforms that bypass labor rights and protections, such as minimum wage and freedom of association. This trend, which is becoming more common in data services and AI industries, highlights the need for a greater emphasis on upholding labor rights and ensuring fair treatment of workers, particularly in the global South.

Furthermore, the analysis underscores the importance of considering diversity and local context in digital technology production. Incorporating different cultural expressions and understanding the needs of different populations are key factors in creating inclusive and fair digital labor platforms and global platforms. By doing so, the aim is to address bias, discrimination, and national regulations to create a more equitable digital landscape.

The analysis also acknowledges the concept of decolonizing digital technologies. This process involves not only the use of digital technologies but also examining and transforming the production process itself. By incorporating the labor dimension and ensuring basic fair work standards, the goal is to create a structurally different work arrangement that avoids exploitation and supports the liberation of oppressed populations.

In conclusion, the analysis highlights the challenges and opportunities surrounding AI labor and digital technology production. While the global South plays a crucial role in AI development, working conditions for AI labor are often precarious. The Fair Work Project and initiatives aimed at improving labor conditions are prominent in the discussion, emphasizing the need for fair treatment and better protections for workers. Additionally, considerations of diversity, local context, and the decolonization of digital technologies are crucial in creating a more inclusive and equitable digital landscape.

Tevin Gitongo

During the discussion, the speakers emphasised the importance of decolonising the digital future in order to ensure that technology benefits people and promotes a rights-based democratic digital society. They highlighted the need for creating locally relevant tech solutions and standards that address the specific needs and contexts of different communities. This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion.

The importance of stakeholder collaboration in the decolonisation of digital rights was also emphasised. The speakers stressed the need to involve a wide range of stakeholders, including government, tech companies, fintech companies, academia, and civil society, to ensure that all perspectives and voices are represented in the decision-making process. By including all stakeholders, the development of digital rights frameworks can be more inclusive and reflective of the diverse needs and concerns of the population.

Cultural context was identified as a crucial factor to consider in digital training programmes. The speakers argued that training programmes must be tailored to the cultural context of the learners to be effective. They highlighted the importance of working with stakeholders who have a deep understanding of the ground realities and cultural nuances to ensure that the training programmes are relevant and impactful.

The speakers also discussed the importance of accessibility and affordability in digital training. They emphasised the need to bridge the digital divide and ensure that training programmes are accessible to all, regardless of their economic background or physical abilities. Inclusion of people with disabilities was specifically noted, with the speakers advocating for the development of digital systems that cater to the needs of this population. They pointed out the assistance being provided in Kenya to develop ICT standards for people with disabilities, highlighting the importance of inclusive design and accessibility in digital training initiatives.

Privacy concerns related to personal data were identified as a universal issue affecting people from both the global north and south. The speakers highlighted the increasing awareness and concern among Kenyans about the protection of their data, similar to concerns raised in European countries. They mentioned the active work of the Office of the Data Protection Commissioner in Kenya in addressing these issues, emphasising the importance of safeguarding individual privacy in the digital age.

The speakers also emphasised the need for AI products and services to be mindful of both global and local contexts. They argued that AI systems should take into account the specific linguistic needs and cultural nuances of the communities in which they are used. The speakers raised concerns about the existing bias in AI systems that are designed with a focus on the global north, neglecting the unique aspects of local languages and cultures. They stressed the importance of addressing this issue to bridge the digital divide and ensure that AI is fair and effective for all.

Digital literacy was highlighted as a tool for decolonising the internet. The speakers provided examples of how digital literacy has empowered individuals, particularly women in Kenya, to use digital tools for their businesses. They highlighted the importance of finding people where they are and building on their existing skills to enable them to participate more fully in the digital world.

One of the noteworthy observations from the discussion was the need to break down complex information, such as terms and conditions, to ensure that individuals fully understand what they are agreeing to. The speakers noted that people often click on “agree” without fully understanding the terms and emphasised the importance of breaking down the information in a way that is easily understandable for everyone.

Overall, the discussion emphasised the need to decolonise the digital future by placing people at the centre of technological advancements and promoting a rights-based democratic digital society. This involves creating inclusive tech solutions, collaborating with stakeholders, considering cultural context in training programmes, ensuring accessibility and affordability, addressing privacy concerns, and bridging the digital divide through digital literacy initiatives. By adopting these approaches, it is hoped that technology can be harnessed for the benefit of all and contribute to more equitable and inclusive societies.

Shalini Joshi

The analysis highlights several important points related to artificial intelligence (AI) and technology. Firstly, it reveals that AI models have inherent biases and promote stereotypes. This can result in inequalities and gender biases in various sectors. Experiments with generative AI have shown biases towards certain countries and cultures. In one instance, high-paying jobs were represented by lighter-skinned, male figures in AI visualisations. This not only perpetuates gender and racial stereotypes but also reinforces existing inequalities in society.

Secondly, the analysis emphasises the need for transparency in AI systems and companies. Currently, companies are often secretive about the data they use to train AI systems. Lack of transparency can lead to ethical concerns, as it becomes difficult to assess whether the AI system is fair, unbiased, and accountable. Transparency is crucial to ensure that AI systems are developed and used in an ethical and responsible manner. It allows for scrutiny, accountability, and public trust in AI technologies.

Furthermore, the analysis points out that AI-based translation services often overlook hundreds of lesser-known languages. These services are usually trained with data that uses mainstream languages, which results in a neglect of languages that are not widely spoken. This oversight undermines the preservation of unique cultures, traditions, and identities associated with these lesser-known languages. It highlights the importance of ensuring that AI technologies are inclusive and consider the diverse linguistic needs of different communities.

Additionally, the analysis reveals that women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalise them further. This disinformation uses lies and hate speech to silence or intimidate these groups. It targets both public figures and everyday individuals, perpetuating gender and social inequalities. In response to this growing issue, NIDAN, an organisation, is implementing a collaborative approach to identify, document, and counter instances of gender disinformation. This approach involves a diverse set of stakeholder groups in South Asia and utilises machine learning techniques to efficiently locate and document instances of disinformation.

The analysis also highlights the importance of involving local and marginalised communities in the development of data sets and technology creation. It emphasises that hyperlocal communities should be involved in creating data sets, as marginalised people understand the context, language, and issues more than technologists and coders. Inclusive processes that include people from different backgrounds in technology creation are necessary to ensure that technology addresses the needs and concerns of all individuals.

In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known languages, counter online disinformation, and include local and marginalised communities in the development of technology. These steps are crucial for creating a more equitable and inclusive digital world. By acknowledging the limitations and biases in AI systems and technology, we can work towards mitigating these issues and ensuring that technology is a force for positive change.

Pedro de Perdigão Lana

The analysis highlights several concerns about Internet regulation and its potential impact on fragmentation. It argues that governmental regulation, driven by the concept of digital colonialism, poses a significant threat to the Internet. This is because such regulations are often stimulated by distinctions that are rooted in historical power imbalances and the imposition of laws by dominant countries.

One example of this is seen in the actions of larger multinational companies, which subtly impose their home country’s laws on a global scale, disregarding national laws. The Digital Millennium Copyright Act (DMCA), for instance, is cited as a means by which American copyright law is extended globally through platform practices. Such imposition by multinational companies can undermine the sovereignty of individual nations and lead to disregard for their own legal systems.

However, the analysis also recognizes the importance of intellectual property in the discussions surrounding Internet regulations. In Brazil, for instance, a provisional measure was introduced to create barriers for content moderation using copyright mechanisms. This indicates that intellectual property is a crucial topic that needs to be addressed in the context of Internet regulations and underscores the need for balance in protecting and respecting intellectual property rights.

Another important aspect highlighted is platform diversification, which refers to the adaptation of platforms to individual national legislation and cultural contexts. It is suggested that platform diversification, particularly in terms of user experience and language accessibility, may act as a tool to counter regulations that could lead to fragmentation of the Internet. By ensuring that platforms can adapt to different national legislations, tensions can be alleviated, and negative effects can be minimized.

Pedro, one of the individuals mentioned in the analysis, is portrayed as an advocate for the diversification of internet content and platforms. Pedro presents a case in which internet content-based platforms extended US copyright laws globally, enforcing an alien legal system. Thus, diversification is seen as a means to counter this threat of fragmentation and over-regulation.

The analysis also explores the concern of multinational platforms and their attitude towards the legal and cultural specificities of the countries they operate in. While it is acknowledged that these platforms do care about such specifics, the difficulty of measuring the indirect and long-term costs associated with this adaptation is raised.

Furthermore, the discrepancy in the interpretation of human rights across cultures is highlighted. Human rights, including freedom of expression, are not universally understood in the same way, leading to different perspectives on issues related to Internet regulation and governance.

The importance of privacy and its differing interpretations by country are also acknowledged. It is suggested that privacy interpretations should be considered in managing the Internet to strike a balance between ensuring privacy rights and maintaining a safe and secure digital environment.

The analysis concludes by emphasizing the need for active power sharing and decolonization of the digital space. It underscores that preserving the Internet as a global network and a force for good is crucial. The failure of platforms to diversify and respect national legislation and cultural contexts is seen as a factor that may lead to regional favoritism and even the potential fragmentation of the Internet.

In summary, the analysis highlights the concerns about Internet regulation, including the threats posed by governmental regulation and the subtle imposition of home country laws by multinational companies. It emphasizes the importance of intellectual property in the discussions surrounding Internet regulations, as well as the potential benefits of platform diversification. The analysis also highlights the need for active power sharing, the differing interpretations of human rights, and considerations for privacy. Overall, preserving the Internet as a global network and ensuring its diverse and inclusive nature are key priorities.

Moderator

The analysis delves into the various aspects of the impact that AI development has on human labour. It highlights the heavy reliance of AI development on human labour, with thousands of workers involved in activities such as collection, curation, annotation, and validation. However, the analysis points out that human labour in AI development often faces precarious conditions, with insufficient arrangements regarding pay, management, and collectivisation. Workers frequently encounter issues like low pay, excessive overwork, job strain, health problems, short-term contracts, precarity, unfair management, and discrimination based on gender, race, ethnicity, and geography. This paints a negative picture of the working conditions in AI prediction networks, emphasising the need for improvements.

The distribution of work for AI development is another area of concern, as it primarily takes place in the Global South. This not only exacerbates existing inequalities but also reflects the legacies of colonialism. Large companies in the Global North hire and develop AI technologies using a workforce predominantly from the Global South. This unbalanced distribution further contributes to disparities in economic opportunities and development.

The analysis also highlights the influence of digital sovereignty and intellectual property on internet regulation. It argues that governments often regulate the internet under the pretext of digital sovereignty, which extends the legal systems of larger nations to every corner of the globe. This practice is justified through the concept of digital colonialism, where multinational companies subtly impose alien legislation that does not adhere to national standards. Intellectual property, such as the DMCA, is cited as an example of this behaviour. To counter this, the analysis suggests that diversification of internet content and platforms can be an essential tool, safeguarding against regulations that may result in fragmentation.

Furthermore, the analysis emphasises the need for documentation and policy action against gender disinformation in South Asia. Women, trans individuals, and non-binary people are regularly targeted in the region, with disinformation campaigns aimed at silencing marginalised voices. Gender disinformation often focuses on women in politics and the public domain, taking the form of hate speech, misleading information, or character attacks. The mention of NIDAN’s development of a dataset focused on gender disinformation indicates a concrete step towards understanding and addressing this issue.

Digital literacy and skills training are highlighted as important factors in bridging the digital divide and empowering marginalised communities. The analysis emphasises the importance of democratising access to digital education and ensuring that training is relevant and contextualised. This includes providing practical knowledge and involving the user community in the development process. Additionally, the analysis calls for inclusive digital training that takes into consideration the needs of persons with disabilities and respects economic differences.

The analysis also explores the broader topic of decolonising the internet and the role of technology in societal development. It suggests that the decolonisation of digital technologies should involve not only the use of these technologies but also the production process. There is an emphasis on the inclusion of diverse perspectives in technology creation and data analysis to avoid biases and discrimination. The analysis also advocates for the adaptation of platform policies to respect cultural differences and acknowledge other human rights, rather than solely adhering to external legislation.

In conclusion, the analysis provides a comprehensive assessment of the impact of AI development on human labour, highlighting the precarious conditions faced by workers and the unequal distribution of work. It calls for improvements in labour conditions and respect for workers’ rights. The analysis also raises awareness of the need to document and tackle gender disinformation, emphasises the importance of digital literacy and skills training for marginalised communities, and supports the decolonisation of the internet and technology development. These insights shed light on the challenges and opportunities in ensuring a more equitable and inclusive digital landscape.

Session transcript

Moderator:
Hi, good morning, good afternoon, and good evening to all those who are joining us on site or online. Welcome to our workshop, Decolonize Digital Rights for a Globally Inclusive Future. Before we begin, I would like to encourage both on-site and remote participants to scan the QR code that is on the screen here; the link is also being posted in the Zoom chat right now, so you can express your expectations for the session. As a reminder, I would like to request that all the speakers, and audience members asking questions during the question-and-answer round, please speak clearly and at a reasonable pace. I would also request that everyone maintain a respectful and inclusive environment, in the room and in the chat. For those who wish to ask questions during the question-and-answer sessions, please raise your hands, and once I call upon you, if on site, please take the microphone at the left or the right side, clearly state your name and the country you come from, and then go ahead and ask your question. Additionally, please make sure that all mics and other audio devices are muted, to avoid disruptions. If you have a question or comment you would like the moderator to read out online, please type it in the Zoom chat, starting and ending your sentence with a question mark to indicate whether it is a question or a comment. Thank you. We may now begin our session. Thank you for joining, whether you are online or on site; this is going to be a very thought-provoking session that delves into the decolonisation of the Internet. I am Mariam Job, and I will be your on-site moderator for today’s session. Online, we have Nelly, who is going to be the moderator, and we have Keolu Bojil, who is going to be the reporter for today’s session.
Keolu Bojil, my sincere apologies if I mispronounce the name, is going to be the reporter for the session. Today, we are gathered here to confront the very uncomfortable truth that the Internet is far from being a place where everyone is equal. Because of historical bias and power imbalances, marginalised groups continue to face barriers in the creation and design of technology. This often results in digital colonialism: the dominance of privileged groups in shaping technology design leads to discrimination against users in the global south, perpetuating linguistic bias and slower removal of harmful non-English content, regardless of the magnitude of hate or harm. The unequal response to these challenges further highlights the disparity. While platforms have introduced features such as safety checks, one-click reporting options, and fact-checking measures for major elections in the West, misinformation and disinformation continue to plague the global south. The under-representation of authors of colour on online knowledge platforms paints a stark picture of the inequalities that persist. Even voice assistants designed to assist and interact with users have been found to reinforce gender biases, normalise sexual harassment, and perpetuate conversational behaviour patterns imposed on women and girls. This not only limits their autonomy, but also puts them at the forefront of errors and biases.
Hate speech targeting marginalized communities continues to rage online, creating a very unsafe environment for those from the global south and from marginalized communities. Users in the global south have the right to feel safe and to have the same autonomy as users in the global north. In this workshop today, we are going to delve into the concept of decolonization in relation to the internet, to technology, and to human rights and freedoms online. Our esteemed panelists, two on site and several joining online, will unpack the evidence that exists of gender stereotypes, linguistic bias, and racial injustice coded into technology. They will shed light on how apps are often built based on creators’ opinions of what the average user should or should not prefer. Furthermore, they will offer recommendations on how online knowledge can be decentralized and how ideological influences can be delinked from the digital arena. They will propose practices and processes that can help decolonize the internet and transform it into a truly global, interoperable space. Throughout the session, we are going to address three policy questions. One: what are the colonial manifestations of technologies such as language, gender, media, and artificial intelligence in the digital domain, and how do we address internet discrimination against people of colour and the intersectional harms emerging on the Internet? Two: how do we address the legacies that shape the Internet and continue to determine its future? Three: how do we address the intersectionality that exists in our technology and the digital arena as a whole, and how can we better include marginalized communities in these discussions?
We hope that by attending the session, participants will gain a deeper understanding of the concept of decolonization in relation to the Internet, will learn to recognize the ways in which bias is built into our technology, and will understand how discrepancies in IT services are perpetuated, with creators’ beliefs and systems that perpetuate stereotypes and historical prejudice drawn into code and data. During the session, we aim to have a conversation about how we can decolonize technology and the digital space and pave the way for a more inclusive digital future. Today we are going to be hearing from a number of speakers; let me introduce them. We have Jonas Valente, joining us online from the Fair Work Project. We have Shalini Joshi, also joining online; she is the co-founder of Khabar, India’s only independent digital rural news network. Our next speaker is Ananya, who is here with us in person. She is a youth advisor on USAID’s Digital Youth Council. Over recent years, Ananya has been very active in the Global Digital Development Forum and has been a Next Generation ICANN ambassador at ICANN 64, ICANN 68, and ICANN 76. Ananya holds a master’s degree in development and labor studies from Jawaharlal Nehru University, New Delhi. We have Pedro, who is joining us online as well. He is an innovation lawyer at Sistema Indústria and a PhD student at UFPR, with an LLM from the University of Coimbra. He is a board member of IODA, ISOC Brazil, and Creative Commons Brazil, and an organizer of the Youth LACIGF. And we have Tevin, who is here with us in person. He is from GIZ and is a tech and policy lawyer based in Nairobi, Kenya.
He heads the data governance division at the DTC, GIZ Kenya, and previously he worked as a data protection advisor at GIZ. He also serves as the secretary of the Kenya Privacy Professional Association. And with that, we begin the session today. We will start with Jonas, who is joining us online, for a brief presentation. Yes, Jonas, are you with us?

Jonas Valente:
Yes, yes, good afternoon. Can I get the possibility of sharing my screen?

Moderator:
Yes, please, let me see that. Can you see it?

Jonas Valente:
Yes, yes, we can see your screen now. Okay, thank you so much. Good afternoon for you, good morning for me in London right now, and an even earlier morning for Pedro in Brazil. So it’s an honor for us from the Fair Work Project to join this panel. I’m going to talk about labor conditions in AI global production networks. And this is super important because normally, in the digital rights community, we look at the effects of technologies like AI, but we also need to look at the workers who are producing them. So the first assumption is that AI development and deployment is super dependent on human labor. And unfortunately, this human labor is characterized by a set of features that make it very precarious, with very insufficient arrangements regarding conditions like pay, management, and collectivization. When we talk about data work, we talk about activities like collection, curation, annotation, and validation, and throughout all of this chain you have human labor. So when we talk about artificial intelligence, it’s important to know that it’s not so artificial. We need thousands of workers, and those thousands of workers are distributed all around the world. But this distribution is not random or neutral; it expresses the legacies of colonialism. We have big companies in the global north who are hiring and developing these technologies, and a workforce mainly in the global south. We can see how the main countries are India, Bangladesh, and Pakistan. We also have a workforce in the United States or the United Kingdom, but mainly global south countries are taking part in this, through business process outsourcing companies or digital labor platforms. The Fair Work Project assesses digital labor platforms against a set of principles, and we try to address the risks of platform work and the platform economy. Which risks are those? First of all, a risk of low pay. Our Cloudwork Report launched this year showed how
micro workers earned around two dollars an hour, and other reports and studies show the same. Of course, when we’re talking about some countries, considering the currency this may not seem so bad, but what the studies are showing is that those payment structures and payment amounts are super insufficient to ensure adequate and meaningful livelihoods. Another problem is excessive overwork and job strain, and this leads to health issues. We have workers working 15 to 16 hours. Normally workers need to switch day for night, because they need to be awake during global north time instead of their own country’s time, and this leads to exhaustion, to problems related to sleep, and to various other mental health issues that we have been finding in our studies. Workers also suffer from short-term contracts and precarity. Normally, if you have a business process outsourcing company, you have a one-month or two-month contract, and when we mention cloudwork platforms, you don’t have a contract in a traditional sense, and these workers need to search for tasks all the time. Our 2022 report showed that those workers worked eight hours on unpaid tasks, and once again this is a legacy that we see of colonial and capitalist regimes and work arrangements. Those workers suffer from unfair management and especially from discrimination, and you can see this discrimination based on gender, based on race and ethnicity, and based on geography; here again we can see the legacies of colonialism. Data workers also face harassment and bullying, and they are subject to extreme surveillance. And finally, another risk is the lack of collective power, which of course turns into more asymmetries between workers and platforms. The Fair Work Project is working all over the world, in almost 40 countries.
It’s coordinated by the Oxford Internet Institute and the WZB Berlin Social Science Center, and funded mainly by GIZ, connected to the German government. We are assessing location-based platforms, cloudwork platforms, and AI. And we have these five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation. We collect data from different sources, and we rank platforms. And to finish, our AI project is looking at-

Moderator:
Jonas, please help us round up.

Jonas Valente:
Yeah, I’m rounding up, but this is the last slide. We are assessing specific AI companies, and when we try to do that, we try to show that the platform economy can be different, and to be different is part of the decolonizing process of AI technologies. Thank you so much.

Moderator:
Thank you so much, Jonas. That was quite insightful, with data to back it up, showing that these are concerning issues when it comes to the decolonization of the internet. We’re going to take another five-minute presentation from another of our online speakers, Shalini. Please go ahead and share your presentation.

Shalini Joshi:
Thank you. I don’t have a presentation, but I have made some points for the discussion today. Thank you very much to the IGF, and thank you to the organizers of this workshop. It’s a real honor to be here. I’m going to talk about the problems with AI in terms of gender and in terms of language, and I’m also going to talk about the work that Meedan, the organization that I work with, has been doing to address some of these issues. As we all know, there have been experiments carried out with generative AI on how different image generators visualize people from different countries and cultures. And when we look at these images, they almost always promote biases and stereotypes related to those countries and cultures. When text-to-image models were prompted to create representations of workers for high-paying and low-paying jobs, the high-paying jobs were dominated by subjects with lighter skin tones and were mostly male-dominated. The images that we see don’t represent the complexity, heterogeneity, and diversity of many cultures and peoples. We also know that AI models have inherent biases that are representative of the data sets they are trained on. Image generators are being used for many applications in many industries, even in tools designed to make forensic sketches of crime suspects, and this can cause real harm. A lot of the models that are used tend to assume a Western context, and AI systems look for patterns in the data on which they are trained, often picking up the trends that are most dominant. They are also designed to mimic what has come before, not to create diversity. So we’re talking about inclusivity in technology: how do we ensure that AI technology is fair and representative, especially as more and more of us start using AI for the work that we are doing? Any technical solutions to solve for such bias would likely have to start with the training data that is being used.
Seeking transparency from AI systems and from the companies involved is also really important, because very often these companies are very secretive about the data they use to train their systems. There’s also the issue of language. Often, AI models are trained with data that uses mainstream languages; often, these are the languages of the colonizers. Many AI-based translation services use only major languages, overlooking hundreds of lesser-known languages. And some of these are not even lesser-known languages: languages such as Hindi, Bengali, and Swahili, which are spoken by many people, also need more resources to develop AI solutions. And from a sociocultural standpoint, preserving these languages is vital, since they hold unique traditions, knowledge, and an entire culture’s identity, and protecting them preserves their richness and language diversity. So in this context, what is it that we are doing at Meedan, the organization that I work with? We are a technology nonprofit. Over the last 10 years, as the internet has evolved and changed, Meedan has maintained a unique position as a trusted partner and collaborator, working both with civil society organizations and with technology companies that harness the affordances of digital technology to communicate. Our approach has been consistent: we build collaborations, we build networks, and we build digital tools that make it easier for hyperlocal community perspectives to be integrated into how global information challenges are met. We understand that our ability to work across community, technology, and policy stakeholders is a privilege, and this is our unique contribution. We see ourselves as facilitators and enablers of change. And we do this by developing open source software that incorporates state-of-the-art ML and AI technologies, and by building coalitions.
A lot of these coalitions are built around large events, such as elections, that enable skills sharing and capacity building. And this multi-pronged approach strengthens collaboration and the ability for hyperlocal community perspectives and participation in addressing global information challenges.

Moderator:
Thank you so much for that, Shalini. That was quite insightful, to learn about the work that you do and about the stereotypes and biases that have been coded into our technology and our internet for as long as we’ve been using them. If we don’t tackle them, if we don’t talk about them, if we don’t even realize that these stereotypes and gender biases are coded into the internet and the way we use digital technologies, we have a long way to go when it comes to decolonizing the internet. We’re going to take another five-minute presentation from another of our speakers, this one on site. But before we do that, I would like to share some of the comments that have been made about expectations for the session. We see that people are expecting reflections, candid direction, articulation, and radical, honest manifestations. Of course, the link is still in the Zoom chat, so if you would like to add your expectations, you may still go ahead and make a comment. Ananya, you may go ahead, please.

Ananya Singh:
Thank you so much. First of all, let me begin by saying that I’m very happy to be here in Japan. And no, it’s not just because Japan is such a beautiful country and the people here are so nice; I mean, of course they are. It’s also because I can finally live a day where I do not get spammed by calls from a range of companies trying to sell me their products, or a bunch of coaching centers trying to recruit me to their engineering institutions with the aid of their tutors. By the way, I have a master’s degree in development studies, so engineering was clearly never my choice. Or random call center agents pushing me to invest in certain deals, or just another automated customer support call trying to divert my attention from my work. The one question that always comes to my mind when my phone rings and the Truecaller app detects it as a spam call is: how did they get my number? Who gave them my number? And why did they give it to them? Why was I not asked? Given that it is my number, and my number is connected to, very obviously, a ton of different data related to me, and since I own both the number and any data related to that number, I should have been asked. But I wasn’t. And I’m sure we are all very familiar with those lottery emails. Come on, we all have a dedicated spam folder where all those great deals, gone-in-a-day bumper offers, and their like keep lurking. So how did they choose you or me? I mean, I have never been that lucky in my entire life, by the way. So who gave them our email address? And if they found our email addresses, are they going to be very far from our residential addresses or our bank account numbers? So the way we live our lives has become excessively dependent on virtual and online activities, even more so after the pandemic. For instance, social media, inbuilt GPS, health apps, taxi apps, Google searches: everything, all of them, require access to our personal data.
Our details, set to public or private, are available for usage by online companies. The principal actors here capture our everyday social acts and translate them into quantifiable data, which is analyzed and used for the generation of profit. In the book The Costs of Connection, the authors Nick Couldry and Ulises Mejias also reiterate this view by emphasizing that instead of natural resources and labor, what is now being appropriated is human life, through its conversion into data. Our online identities have become a commodity which can be exploited and used for capital gains, controlling our time and usage and influencing important decisions and processes in our lives. Hence the term data colonialism. But I know some people do contest the usage of the term data colonialism, because historically, colonialism is unthinkable without violence, the takeover of lands and populations by sheer physical force. That’s true. But let’s take the example of the Spanish empire’s requerimiento, or the demand document. It was meant to inform the natives of the colonists’ right to conquest. Conquistadors read this document out, demanding the natives’ acceptance of their new conditions, in Spanish, which no local understood. Now think of the terms of service we sign up to every time we join a platform. They’re often unclear, long, and full of jargon, which we rarely have the time to read, and so automatically, almost like a reflex, we click on “I agree”. But do we really agree? Unknowingly, we are giving consent to being tracked online, to being called at odd hours to be sold insurance policies for our children; by the way, I don’t have any. And hence our ignorance, our implied or uninformed consent to these kinds of data collection, provides a very valuable yet free raw material: data. Once, a senior official from a very famous company stated that data is more like sunlight than oil, meaning a resource that can be harvested sustainably for the benefit of humanity.
But this very idea makes my personal data a non-excludable natural resource available for public use. But does it not contradict the very word personal in personal data? Okay, I’ll leave you with that.

Moderator:
Thank you. She’s the only person who’s been on time since this session started. Thank you very much for that, Ananya. We’re going to take a five-minute presentation as well, from Pedro, who’s joining us online. Pedro, are you online? Yes, yes, we can hear you.

Pedro de Perdigão Lana:
That’s great. So good afternoon, everyone. I hope you’re all well. I’m greeting you from a 4 a.m. pre-holiday morning here in Brazil. But to get to the presentation, what I want to comment on with you today, just let me put a timer here, there we go, is the results of a research project funded by a Latin American program focused on youth named Lideres 2.0. It is an amazing program with many interesting and diverse phases, and I recommend you all seek more information about it, maybe as a way to repeat the idea in your regions. And for the sake of time, back to the real content of the presentation. The idea here is simple: linking sovereignty, fragmentation, regulation as a reaction, and the theme that I try to force into everything that I research, intellectual property. Governmental regulation is probably one of the most important threats we have to the internet when we are talking specifically about the dangers of fragmentation, but it’s important to see what is behind these regulatory proposals, or, to be more precise, what serves as justification for these movements. The argument that I will try to put forward here is that even when this is not the real reason that motivates public authorities, especially when I’m talking about authoritarian ones, hard regulation based on digital sovereignty arguments is frequently stimulated by distinctions that originate in what we call digital colonialism, be it from multinational tech companies or from countries that have much more steering power in modeling the internet than others, even if that power is not exercised in such a direct and explicit manner. We can see this when those larger multinational companies end up extending the legal systems of their home countries to every corner of the globe, subtly imposing alien legislation even when it doesn’t follow the standards of the national laws that actually apply. This is where intellectual property comes in.
The Digital Millennium Copyright Act, or DMCA, a result of the copyright reform for the information society in the USA, establishes systems of notification and counter-notification and other mechanisms that are severely favorable to the copyright rights holder, and the largest content-based platforms seem to have repeated those systems all over the planet, sometimes, of course, with great support from the international lobby of the American entertainment industry. Similarly, when I go to a Brazilian page, for example, that responds to allegations of copyright infringement on these content-based platforms, I will almost always see explanations of how fair use works, which is an institute that simply doesn’t exist in the Brazilian legal system, since this is a country that adopts a system of limitations and exceptions for permitted uses of copyrighted works. Of course, this example may seem strange to some: how many people actually care about intellectual property when compared to discussions such as disinformation or freedom of expression? But the fact is that all these areas are umbilically linked. In Brazil, for example, we even had a provisional measure, which is something like an executive bill, that intended to create obstacles for content moderation through copyright mechanisms. The most important point here is that this exemplifies a much broader behavior that attracts a lot of negative perception and may be instrumentalized by ill-intended actors. If a multinational platform doesn’t even care about conveying an image of following something as central to the idea of sovereignty as national legislation, you can only imagine what a foothold this is for movements that want to showcase the transnational interactions made possible through the internet as something dangerous, something that needs to be controlled.
Summing this up: internet content and platform diversification, where we’re talking about user experience, language accessibility, et cetera, is not the same as fragmentation. Not only that: this diversification, where a platform actually adapts to certain cultural contexts, may actually be an important tool against pushes for regulation that could result in fragmentation. So back to you.

Moderator:
Thank you, thank you so much for that, Pedro. That was quite insightful. And now we’ll take our last opening remarks, from Tevin from GIZ, Kenya. Yes, it’s working now.

Tevin Gitongo:
Okay, good afternoon, everyone. My name is Tevin Mwenda Gitongo. We’ve had quite a number of presentations, and mine is going to take a different tangent: mine is going to show you how we are trying to decolonize the digital future. We’ve heard all the things that are happening, and sometimes it sounds scary, so ours is more of: let’s try and actually solve it, let’s put our money where our mouth is. I’m going to make a short presentation of the project that we’re working on at GIZ. As you’ve heard, I work for GIZ Kenya under the Digital Transformation Center, which is a project supported by the German government and Team Europe, working together with the Kenyan government, specifically the Ministry of ICT. And in our own little way, I can’t say we are perfect, but we are trying to see how we can do this across different aspects. One thing we must recognize, and I know we’ve had a lot of presentations on AI, is that decolonizing the digital rights future is not just about AI; it has to be every other facet as well that builds up to the AI. And that’s what we are trying to do in our own small way. So the project, as you can see, has the objective of supporting Kenya’s digital transition towards a sustainable and human-centered digital economy. There are three visions and missions, but I’m going to look at the two major ones that affect this panel. The first one is that we recognize that we must make technology work for people. Throughout the presentations you’ve had, that’s maybe where we are really going wrong, particularly in developing countries: the technology being made, at some point, is maybe not working as it was ideally intended. The other one is to enable a rights-based and democratic digital society. So we really have to be aware of that. And the approach we decided to take with this, I can say, interesting experiment is, on one hand, to leapfrog Kenya’s digital economy.
So, I’m going to give you a brief overview of what we’re doing, and this is all done working together. The first thing we’re doing is working with the local digital innovation ecosystem to build capacities on data protection and IT security, to foster a data-driven economy, and to work towards decent job creation in the gig economy. All of this builds up together to enable that. The other thing that we’ve done is to build Kenya’s digital society, and this includes exploring emerging tech like AI. We’re digitalizing public services, but in a user-centric way so we don’t leave anyone behind, and building capacities on data protection. We also focus on bridging the digital divides, and we do this by ensuring no one is left behind: the youth, women, rural and urban populations, and also persons with disabilities. As for the approach we took, what you see on the side are all our stakeholders. We try to embody the multistakeholder practice of an IGF in the everyday work that we do, because at any one moment, like in my work, I deal with all of those stakeholders. One of the best ways to actually achieve a future of decolonized digital rights is to leave no one behind. So we have government in our teams, we have the private sector, we have civil society, and we have academia; we have the big tech companies and the local tech companies. We have a team of about sixty to seventy people, and we’ve done quite a number of studies; I’ll mention the ones that are relevant to this. The first one was a study on a human-centered cybersecurity approach.
If you know Kenya, we are known as a fintech powerhouse, in terms of the work that we do there, and out of that we’ve done a couple of other things. One of them is data protection and privacy from a gender perspective, and I think that’s important, because we always forget that the most vulnerable groups, particularly when it comes to data protection, in most cases are women. So we decided to look at data protection and privacy from a gender perspective and at how to enable participation online. The next thing that we did, and I’m going to jump to this one, is strengthening gig workers’ rights. Every year we publish a report where we rank digital labor platforms under the ILO Fair Work principles and how they are performing. And the other one, when it comes to AI and leaving no one behind, maybe the one that I’m always excited about, is building local solutions. One of the things that we did, for example, working with Kenyan entrepreneurs and Kenyan coders, is that we are now creating chatbots, like the versions of OpenAI that you see, but these ones are locally created. They’re able to speak English, Swahili, and a mix of English and Swahili. In that way, some of these products that are created are geared towards the people they serve, and they’re able to help. And also, in relation to persons with disabilities, we developed the first-ever continent-wide ICT accessibility standards. So those are just some of the few ways in which we are trying to, I can say, decolonize digital rights. That was just an overview of it all. Thank you very much.

Moderator:
Thank you. Thank you very much for that, Tevin. I think our collective efforts are very much needed on these kinds of issues. Our panelists have shed light on the concept of decolonization in relation to the Internet, technology, human rights, and online freedoms. I think it’s time that we engage in discussions that go deeper into these concepts and explore the synergies and trade-offs involved. Our objective, really, is to understand how we can address these issues responsibly to create something more sustainable and equitable for a globally inclusive digital future. I would now like to start with Jonas, who is online. Jonas, what are some of the ways in which cheap labor from the global south powers contemporary digital products and services?

Jonas Valente:
Cheap labor is key for all AI development, and this is why lots of companies are using digital labor platforms: because these digital labor platforms circumvent social protections and basic labor rights, sometimes rights we have had since the 19th century, like the minimum wage or freedom of association. Using that, those companies can benefit from this cheap labor, and those workers unfortunately are not being properly compensated, do not have health and safety protection measures, and do not have the rights that were won from the 19th to the 20th century. Unfortunately, this is becoming the rule in the data services global value chains, including AI, and that’s why we need to address this issue and talk about how to ensure those labor rights for workers all around the world, but focusing specifically on what’s happening in the global south.

Moderator:
Thank you, Jonas. I have a question for Shalini, but before I go on to that, I have a follow-up question for you, Jonas. Why are these conditions so bad, and how is the Fair Work Project working to improve them? Jonas, you have the floor.

Jonas Valente:
Currently, so far, the regulatory efforts, they are only addressing on-location platforms.

Moderator:
Okay, we’re talking about the Internet Governance Forum and, you know, we’re having internet issues online. So we’re going to go ahead and move to Shalini, since there’s an internet blockage over there. Shalini, you mentioned some of the work you do at Meedan during your opening remarks. What forms do online hate and falsehoods take in the APAC region?

Shalini Joshi:
Thanks. I’m going to focus on the issue of gender in the Asia-Pacific region, and specifically on South Asia. Women, trans people, and non-binary people in South Asia are regularly targeted with online disinformation, and this disinformation is propagated in an attempt to silence already marginalized individuals and make it difficult for them to safely participate in public discourse. Much of the work on gender disinformation covers women in politics and those in the public domain. Research also shows that the narrow definitions of gender disinformation, and the current focus on women public figures, sometimes sideline affected girls, women, and gender minorities who do not have a public presence. Gender disinformation, as we know, can take many forms: hate speech, intentionally misleading information and rumors, attacks on the character and affiliations of people, and attacks on the private and public lives of people. It impacts people in such a way that they are either self-censoring, removing their social media content, or living in hiding. There are direct and indirect threats to their lives, and it generally enforces stereotypes of vulnerability. So what we’re trying to do at Meedan is develop a data set of instances of gender disinformation to build more evidence for supporting research and policy action. We have brought together a diverse set of stakeholder groups in South Asia to work collaboratively to define gender disinformation from a South Asian perspective, and to identify, document, and annotate a high-quality data set of gender disinformation and hate in online spaces for better understanding and countering of the issue. We’re going to use machine learning techniques in the process. And as we document more instances of gender disinformation online, we feel that the technology that we use will also become better at locating additional content, thereby creating a virtuous cycle.

Moderator:
Thank you, Shalini. When you started answering the question, I was going to follow up about some of the best practices and measures you have put in place to counter online hate targeting marginalized communities, in your context women, but you answered that when you talked about the data set you are developing. So, thank you for that. Ananya, in your opening remarks you talked a lot about data and how it really is the key, the new oil. So, what are some of the implications of data colonialism and surveillance capitalism for digital rights? How can individuals and communities reclaim control over personal data that they are sometimes not even aware they are giving out, and how can they protect their privacy in the digital realm?

Ananya Singh:
Yes, apparently it’s no longer oil, but sunlight. Well, historically, the era of colonialism was ushered in by boats that came to the new world to expand empires through infrastructure building and precious metals extraction. Now, like everything else, colonialism is also going digital. It establishes extensive communication networks, like social media, and harvests the data of millions to influence things as simple as advertising and as critical as elections. Data colonialism justifies what it does as an advance in scientific knowledge, personalized marketing, or rational management, just as historical colonialism claimed to be a civilizing mission. But yes, some things have changed. Where historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab clusters around the capture and control of human life itself, through appropriating the data that can be extracted from it for profit. Data colonialism is global, playing out in both the global north and the global south, dominated by powerful forces in both the east and the west. Unfortunately, regardless of who directs these practices or where they take place, they often lead to the erosion of privacy rights, as individuals’ personal data is collected, analyzed, and used without their knowledge or explicit, informed consent. And as you saw in the example I gave about the spam calls I get, there is little to no retracement mechanism. I mean, yes, I can block and report, but can I live happily ever after? No. Because there will be yet another company that has employed another spammer waiting to call me again to sell their policies. My data and your data are now in the hands of so many people that it is going to be extremely difficult for us to individually trace and then erase our data. Hence this ultimately results in a loss of autonomy and control over our own personal information.
While our data may be widely dispersed, the power to capture and control it remains concentrated in the hands of a few. This can lead to a lack of transparency, accountability, and democratic control over data practices, potentially undermining individuals’ rights and freedoms. The collection and analysis of personal data can perpetuate existing inequalities, as some of my able panelists have already mentioned: training emerging technology on biased data can lead to biases in algorithms, unfair targeting, exclusion, discrimination, and the list goes on. These practices can also be used to manipulate and influence individuals’ behaviors, opinions, and choices, threatening individuals and democracies. We have seen that happening already. Undeniably, ideologies such as connection, community building, and personalization will keep incentivizing corporations to collect more of our personal data. Hence the only way to prevent data colonialism from expanding further is to question these very ideologies. Individually, we must prioritize data minimization: be mindful of the information we share online and limit the amount of personal data we share with technology platforms. I personally do this by limiting my social media presence, which, by the way, is very good for your mental health as well. I like to call this digital minimalism. Further, think twice before you agree to the terms and conditions. While it is easy to be fatigued by the almost incomprehensibly long document written in complicated language, take time to think before giving in to the impulse of clicking on "I agree". I’ll stop with that because I don’t want to take more time than I have been allocated. Thank you.

Moderator:
Thank you for that, Ananya. That’s quite insightful. However, I do have a comment to make. Honestly, we want people to be at ease, comfortable, and safe on the internet, not to have to restrict themselves from using the internet or social media. So I think this is something we have to come back to, maybe in another session or towards the end of this one: how to make sure that data is utilized properly and with purpose, not just for spam calls like the ones you experience. I will move to Pedro, who’s online. Pedro, my question to you is: do multinational platforms care about the legal and cultural particularities of the countries in which they operate?

Pedro de Perdigão Lana:
Yeah, I will try to shorten my presentation as well so we can give the floor back to Jonas at the end of this section and he can finish his. I do think they care, because generating conflicts with the geocultural and legal particularities of the markets in which you are trying to sell your services usually means lower profits, or at least higher costs. But these concerns only go as far as the immediate costs of adaptation being considered not too high, and this is a problem when you consider the difficulty of measuring the indirect, long-term costs that platforms would certainly suffer in a fragmentation scenario. For example, the platforms investigated in our research project translated their main pages about intellectual property policies, but when you browse for more details, you notice that not even something as simple as translating some pages was normally done, and even the interlinks led us to English versions. One of them, which was not content-based, had only the most basic page translated.

Moderator:
And how is this reflected in the global human rights system, which, as a rule, still relies on the sovereignty of national legal systems to determine jurisdictional conflicts?

Pedro de Perdigão Lana:
Well, I think this reflects directly on human rights. Intellectual property is itself globally considered a human right, but what I mean here is that, although we have some international frameworks, human rights are not interpreted the same way worldwide. Freedom of expression is a good example: some cultures see it as a much broader right than others. Copyright itself may be stronger or weaker when confronted with other fundamental rights, such as education or access to culture. So if platforms need to frame their policies around such concepts, they should at least do it in a way that is not so clearly unbalanced towards a single perspective. Simply telling users to follow external legislation as guidance is, quite frankly, a bit offensive, since it really wouldn’t cost that much to get someone to do a quick review of the legal policies and deliver some adaptation, even a superficial one. The problem here is the image that those platforms simply do not care about some basic elements of the societies they have as markets for their services and products, especially when we see that they evidently can adapt, as one can observe with the changes made because of the German legislation called NetzDG, especially on social media.

Moderator:
All right, thank you for that. And I will move on to Tevin here. Tevin, in your opening remarks you talked a lot about what your organization is doing with regard to working with communities, especially marginalized communities. I want to ask you: how can digital literacy and digital skills training be reimagined in a way that empowers marginalized communities and bridges the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital realm?

Tevin Gitongo:
Thank you for the question. I’ll pick up from the question you asked earlier about whether large entities care about legal and cultural considerations. The lesson I’ve learned is that you have to care about cultural considerations for your trainings or digital skills work to have any impact. You have to bring yourself to the level of the partner you want to train and be there with them. Thinking of it practically: how do you do that? How do you demonstrate that you are aware of the person’s context and how you can help bring them up to where you want them to be in terms of lessening the digital divide? I normally look at it as four steps. The first is the stakeholders you work with, because more often than not, you end up working with stakeholders who have no clue what’s happening on the ground. You go there, you tell them you’re going to do this, and they tell you, yes, we’ve done a training. Then you go on the ground and realize, oh, this was the wrong stakeholder; clearly I did not understand what was happening in this context. And that way, the training doesn’t have impact. The next thing I look to is accessibility, and I look at that in relation to democratizing knowledge. By this I mean that when you do a training, you should actually be transferring knowledge, not just ticking a box. I’ve seen there’s a huge difference there, because in most cases we are ticking boxes but not actually transferring knowledge that helps people grow.
One of the things we’ve done with that, and I’ll give an example (I see my colleague is also here), is when we were developing the AI chatbots. Because it was a skill we were trying to transfer, we brought Kenyan developers into the room together with other developers, I think from Europe, who have expertise in developing such models. And we said: we want you to teach each other, not just one side teaching the other, how to develop this. The Kenyans come with indigenous knowledge of how Swahili works, to develop an NLP system in Swahili or a mixture of Swahili and English; the others come with knowledge of how to build these systems. What happened is that after they built the first system, for the next one we’re developing, for Kenya’s Data Protection Commissioner, it’s the Kenyans who are running the show now. It’s them who are developing everything. So you start seeing that you’re slowly reducing that gap. The next thing is affordability, of course: if you really want to create any impact, you have to create training that people can afford, which also goes back to accessibility. And lastly, inclusion of everyone. This can also be done practically, and one of the things I mentioned we assisted in developing is the ICT standards for persons with disabilities for Kenya: whenever you’re designing a system, how do you design it for persons with disabilities so you don’t leave them out, given that Kenya is digitizing a lot but we keep forgetting that whole area. I think that’s it. Thank you.

Moderator:
Thank you so much for that, Tevin. With everything the panelists have said, it always goes back to bridging the digital divide: digital skills, making sure that people are aware of these issues, know how to protect themselves, know how to use the tools, and know what the problems are and how to tackle them. In any matter of Internet governance, that awareness is very important. We’re going to go back to Jonas, who had issues online, but I think we have some time to spare. He’s back now, and he was going to tell us about the ways in which cheap labour from the global south powers contemporary digital products and services. Jonas, can you please tell us why these conditions are so bad, and how the Fairwork project is working on improving them?

Jonas Valente:
These conditions are bad because platforms found a way (I think my connection will freeze again; I hope it doesn’t) of circumventing labour and social protections, and by doing that, companies can hire cheap labour. That’s why we’re seeing low pay, health and safety issues, and management problems all around the world. A study has estimated there are 163 million online workers, so this is a very representative number of people. The Fairwork project assessed platforms all around the world, in 38 countries, and scored those platforms against five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation. So I invite all of you to visit our website, fair.work, where you can see platforms from your country and check what they are doing, or not doing, to meet basic standards of fair work.
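Fairwork's public methodology scores each platform out of ten: one basic and one advanced point per principle, with the advanced point only awardable once the basic one is met. A minimal sketch of that scheme (the example ratings below are invented, not real platform scores):

```python
# Sketch of Fairwork-style scoring: five principles, one basic and one
# advanced point each, advanced only counted on top of basic, max 10.
PRINCIPLES = ["pay", "conditions", "contracts", "management", "representation"]

def fairwork_score(ratings):
    """ratings maps each principle to (basic_met, advanced_met)."""
    total = 0
    for principle in PRINCIPLES:
        basic, advanced = ratings[principle]
        if basic:
            total += 1
            if advanced:
                total += 1
    return total

# Invented example ratings for a hypothetical platform.
example = {
    "pay": (True, False),
    "conditions": (True, True),
    "contracts": (False, True),   # advanced without basic earns nothing
    "management": (True, False),
    "representation": (False, False),
}
print(fairwork_score(example))  # prints 4
```

A platform meeting every basic and advanced threshold would score the full ten points; the actual rating criteria per principle are detailed on fair.work.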

Moderator:
Okay, thank you for that. I would like to thank all our speakers, both on site and online, for sharing their insights, their experiences, and the efforts they’re working on, and I would like to open the floor for questions both on site and online. Online moderator, do we have any questions online? If you’re on site and have a question, you may go to one of the standing mics, state your name and the country you’re from, and go ahead with your question. We have one question on site.

Audience:
Hi, my name is Daniele Turra. I am one of the youth ambassadors from the Internet Society, and I’m from Italy. As the panelists anticipated, I understand there are a lot of problems: specific legal diversities that are not always respected, lack of accessibility, and the need to respect privacy. These different problems and needs are not always respected, and much of that is because of economic patterns and interests worldwide. But some of them, for example privacy, I would argue are also global rights; we can discuss whether they are human rights. I would be very interested to see, let’s say, a taxonomy of specific local needs that are not respected by technologies of the global north, in terms of culture, history, or political characteristics, and to understand which of these are shared with the global north and which are not. By "not" I mean not people originally born in the Global South who later came to live in the Global North, but specifically populations that plan to thrive in their own country of origin. The idea is to understand which needs are local and which are global.

Moderator:
Thank you. Okay, do you want Jonas to answer that, or is it open to any of the speakers? Sir? Okay, so Jonas, do you want to take up this question?

Jonas Valente:
What I would like to say is that when we talk about national and cultural context, what the Fairwork project brings is that we have one very serious problem that has been addressed here by other speakers: the biases and discrimination faced by users. But we also need to consider what is behind digital technology production, and that’s why we highlight this discrimination and the consideration of the local context. For instance, when Pedro brings up the discussion on national regulations, we also need to consider the national regulations about work, and how those regulations, the national and local context, and the diversity of populations and cultural expressions can be considered, in their own characteristics, on the internet as a whole, but especially in digital labor platforms and global platforms. That’s why I believe this discussion that Pedro brought, and that we are now having, needs to look at those diverse contexts and groups, and at the same time think about how to incorporate them not only in digital and data practices, but in regulatory efforts as well.

Moderator:
Okay, thank you. I’m also going to invite Tevin to address the question in a very brief remark. Yes.

Tevin Gitongo:
So, you asked what is local and what is international. International, I’ll say privacy. I think you alluded to it: it affects everyone, whether global north or global south. We see it in Kenya every day. We have a data commissioner’s office and we actively work with them, and the same issues raised in European countries are raised in Kenya, even when it comes to AI products: why am I getting these marketing messages, how did you take my data, issues of consent, where are you using it. Kenyans have become very interesting; they’ve been asking where you are transferring the data to. They’re asking questions you will find anywhere, and this is not just the urban Kenyan, it’s even the rural Kenyan. You’ll go and talk to them and they’ll say, okay, I saw this application, but they told me to do this and I’m wondering why. So it’s something everyone is aware of. As for local, I’ll say languages, because when you’re developing, for example, natural language processing systems, sadly most of them are geared towards the global north. The pronunciation of words, the way the language is used, is very different, and it’s time we start looking at the local aspects, especially languages, because that’s the only way you start bridging the digital divide: not everyone will speak fluent English or fluent Swahili, and you need to develop products that cater to their needs.

Moderator:
Yes, and that brings me to a question for Pedro. Pedro, regarding the search for a balance of power between countries: is there a risk here, and how does this affect the Internet as a global network?

Pedro de Perdigão Lana:
Yeah, I would like to build on what was just said by the previous speaker; I would use the same example, but inverted. I think language is an international issue, because even though we adapt to each country, it’s the same issue we have around the world. Privacy, on the other hand, can be interpreted differently: what’s most important, what’s not. And that’s exactly what is especially dangerous when platforms do not diversify what they are doing at an international level; they prefer some regions to others. In a period when international relations are becoming increasingly tense and discourses against external threats are on the rise, it is very easy not just to expose true facts about how these relations work, such as how these platforms may be an instrument for expanding the influence of a certain country or even act directly on its behalf, as we learned with Snowden, but also to extrapolate this context to gather support for action that presents the international nature of the Internet as a problem in itself. So doing those small things, such as translating content correctly and adapting to national legislation, may be exactly what we need to avoid a splinternet, to avoid the Internet as we know it being severely affected.

Moderator:
Thank you for that. We have learned that we have some questions from the online participants, and I would like to call on Nelly, our online moderator, to read the questions out loud for the audience. Nelly, you may take the floor, please. Are you with us, Nelly? Okay, it seems Nelly is not with us. Any other questions from the on-site participants? Nelly, we think you’re muted. Please unmute your mic and take the floor. Technical team, can you please help us give the floor to Nelly? Unmute her mic. Okay, if there are no other questions, it seems we’re rounding up the session. I’m actually very thrilled to invite our speakers to share their invaluable recommendations on the following question. Was that Nelly? Nelly, is that you? Okay, we’re going to go ahead; if Nelly happens to unmute her mic, we’ll take questions from her then. Until then, I’m going to ask our panelists, who have shared their insights and experiences, for their recommendations on the following question: what should decolonizing digital rights look like? But before I give you the floor, I would also like to strongly encourage the audience to seize this opportunity to share your recommendations by scanning the QR code that will be displayed on the screen shortly. And now I would like to welcome Ananya. Please go ahead: tell us, what should decolonizing the internet look like?

Ananya Singh:
I think this has just been done because I finished ahead of time. All right. Well, let’s say this: my blood group is B positive. There you go, you have another one of my personal data points. Anyway, being the positive person that I apparently am, I believe that every cloud has a silver lining. So this cloud of data colonialism presents an opportunity for us to create ethical systems which run on the following principles. A: ownership by design, where users are provided with clear and understandable information about how their data will be collected, used, processed, shared, stored, or erased. It involves obtaining informed consent that is granular and specific, allowing individuals to make informed choices about their data. B: minimization and anonymization. Only the necessary and most relevant data is collected and processed, and wherever possible such data is kept anonymous and encrypted. This reduces the risks associated with data breaches and unauthorized access while respecting individuals’ privacy. C: there should be an option to be forgotten, or to easily revoke consent when desired. I know there are options to be forgotten, but revoking consent has been a complicated process so far. D: mechanisms for accountability and redress in case of data breaches or privacy violations, which are currently hard to find. This involves providing individuals with avenues to exercise their rights, report violations, and seek remedies for any harms, and it should and must go beyond blocking and reporting accounts. And E, I just want to finally note this: the whole entitled attitude that makes data colonialism possible must be done away with. Put simply: I was born with a name, and my name is a data point. Just because I provided my name to my school on the day of enrollment does not automatically translate into an unchecked right to use my name for the rest of their existence. Data use is not a right but a permission, and data reuse is not an entitlement but, once again, a permission slip. Thank you.

Moderator:
Thank you so much for that, Ananya. And I think we have access to Nelly now, so we’re going to take the question from her online. Nelly, you may unmute your mic, please, and ask your question to our panelists.

Audience:
Thank you for letting me turn on my mic. The question that arose from your very interesting discussion is this: how can digital literacy and skills training be reimagined to empower marginalized communities and bridge the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital world?

Moderator:
Can you please repeat the question, Nelly?

Audience:
Yes, of course. How can digital literacy and skills training be reimagined to empower marginalized communities and bridge the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital world?

Moderator:
Okay, Tevin is going to take the question.

Ananya Singh:
I think, just to help Tevin answer the question: it basically means how we could program and structure digital literacy programs that would, I assume, help people better navigate a more decolonized world. So how could digital literacy aid the process of decolonizing the Internet?

Tevin Gitongo:
Yes. For that, I’m a proponent of, and I keep reiterating, putting yourself in the shoes of the person. I’ll give a good example from a discussion we were having recently. In Kenya we have a lot of women selling food and groceries in little shops; whenever you go, you buy groceries from them. And we were thinking: how do you enable them to, for example, use digital tools for the sale of their products? That’s what we are trying to do with our projects. It’s always been us telling them to come to us, but now it’s: how do we go to them, at their level, and work with the skills they already have, because they really do have a lot of skill, and just empower that? I think that’s what the challenge and the discussion should be. And I can’t say we have the complete answer; it’s a learning process. But I’m a big proponent of finding people where they are. Don’t make them come to you, because that’s more of a burden. You look for them and work with them where they are. One of the things the study showed, even when we were talking to them, was how much knowledge they have. For example, one of them was telling us: you know, you click on this, and I don’t know what I’m clicking on; it doesn’t make sense when I read it. As Ananya just said, it’s the terms and conditions, and it’s like 30 pages; you just say "I agree" and move on. But they are cognizant of the fact that they’re giving away their data. So perhaps it’s about coming to them and breaking it down to a point where they also understand.

Moderator:
I like that you mentioned that, because there’s a principle I recently learned in digital development: you have to design with the user. At the end of the day, if you’re looking to benefit them, they need to be actively involved in the process; you need to know their challenges, their perspectives, and what they think will benefit them, and include that in the process. I would like to continue with the recommendations from our panelists about what decolonizing digital rights should look like.

Moderator:
Ananya has given hers, so we’re going to move on to Jonas online, who will share his. By the way, please note that I’m sharing my screen and there’s a QR code; it should be on the screen. Yes, that one. You can just scan it, or I’ll send the link in the chat so the online participants can make their comments as well. So Jonas, please share your recommendations on what decolonizing digital rights should look like.

Jonas Valente:
Thank you so much. I would say that decolonizing digital technologies involves not only decolonizing the use of digital technologies, but also the production process. That’s why we need to incorporate the labor dimension into our decolonization agenda. This means ensuring not only basic standards of fair work, which is what we are assessing in our project, but a radically and structurally different work arrangement, where workers are not exploited and where we don’t have asymmetries between international, national, and local levels and population groups. I believe we need to incorporate this into our agenda, and to quote the Latin American philosopher Enrique Dussel: it’s not only about decolonizing, it’s about liberating oppressed people and creating something radically new.

Moderator:
Thank you for that, Jonas.

Shalini Joshi:
Yeah. I’m going to be very brief and say that in order to decolonize digital rights, it’s really important to look at who is included in the process of creating digital tools. We have to involve hyperlocal communities in creating data sets, something I talked about earlier as well. We also have to make sure that people from marginalized communities are involved in analyzing the data, annotating the data, and actually creating the technology, because it’s these people who understand the context, the language, and the issues much better than technologists, coders, and developers sitting somewhere else. So involving people in the creation of technology, making processes more inclusive, ensuring that many, many languages are included in the way we analyze data: all of that is really important.

Moderator:
Thank you. Thank you for that. And Pedro, what are your recommendations?

Pedro de Perdigão Lana:
Yeah, I already covered this a bit in my last comments, but just to be very brief: I think platforms should try to diversify a little more to adapt to local cultures and scenarios, and countries that are historically more influential in directing how the internet is modeled should actively try to share those powers and capacities. It’s not just about decolonizing the digital space, but about preserving the internet as we know it, as a global network and a force for good. That’s it. Thanks for your attention, everyone.

Moderator:
All right, thank you. And we’ll take our last recommendation from our final panelist, Tevin.

Tevin Gitongo:
Perhaps my recommendation would be to ask ourselves four fundamental questions, some of which have already been alluded to. The first is: who? Who’s developing the systems? The second is: why are they doing it, and what do they stand to benefit? In most cases, it’s for economic gain; if we’re being honest, the baseline of this whole conversation is economic gain. As Ananya said, data is the new sunlight, and let’s call it the politics of data: everyone wants to be the ruler of data now. The third is: where are they being developed? Are they being developed where the marginalized people you’re targeting are? Because someone sitting in Silicon Valley, to be quite honest, is not really thinking of me as a Kenyan using their AI product; I am the last person on their mind, because of where they are. And the last is: what? What is it for, at the end of the day? Yeah, thank you very much.

Moderator:
Yes, that’s very true. All our panelists have shared very thought-provoking and insightful experiences and expertise on this topic. As we conclude this session, I’d like to express my sincerest gratitude to our online and on-site panelists for their expertise and thought-provoking contributions. Your insights have been instrumental in deepening our understanding of the complexities that surround the decolonization of the Internet and technology. I’d also like to thank the audience, both on site and online, for your engagement, your questions, and for being here today. Your participation has greatly enriched our discussions. In closing, I would like us to remember that the journey towards a decolonized Internet and digital landscape is ongoing. It’s not static, it’s not something already established; it’s ongoing, and it’s a learning process. It requires continuous reflection, dialogue, and calls to action. Tevin talked about who is benefiting, what it is for, and economic gain. I think that together we can strive for a digital space that is inclusive and that respects and empowers all individuals and communities, regardless of their background or geographical location. We have to work together to create a future where the internet truly becomes a force for equality, justice, and liberation. Thank you, and that is it for this session. Thank you all.

Audience:
We have another session happening here in about 22 minutes, at 17:30 Japan Standard Time. I hope you’ll stay with us. If you want, you can quickly grab something to drink or eat in the meantime, and if you’re going outside, please bring your colleagues and friends to join us for the next session. Thank you very much for attending. Thank you.

Jonas Valente — Speech speed: 157 words per minute; Speech length: 1618 words; Speech time: 618 secs

Pedro de Perdigão Lana — Speech speed: 168 words per minute; Speech length: 1536 words; Speech time: 549 secs

Ananya Singh — Speech speed: 168 words per minute; Speech length: 1898 words; Speech time: 678 secs

Audience — Speech speed: 158 words per minute; Speech length: 519 words; Speech time: 197 secs

Moderator — Speech speed: 169 words per minute; Speech length: 3814 words; Speech time: 1356 secs

Shalini Joshi — Speech speed: 133 words per minute; Speech length: 1247 words; Speech time: 561 secs

Tevin Gitongo — Speech speed: 218 words per minute; Speech length: 2718 words; Speech time: 749 secs

European Parliament Delegation to the IGF & the Youth IGF | IGF 2023 Open Forum #141

Full session report

Mr. Lagodinsky

The European Parliament’s approach to artificial intelligence (AI) and generative AI focuses on regulation rather than a complete ban. The regulation primarily targets high-risk applications and generative AI to ensure responsible and safe use of these technologies.

One of the driving forces behind this approach is the recognition of citizens’ unease regarding AI technology. There is growing concern among the public about the potential risks and implications of AI, leading to a closer examination of the European Union’s regulation strategy. This scrutiny extends beyond European borders, with Africa also closely observing these developments.

The Parliament emphasizes the importance of striking a balance between protecting small and medium-sized enterprises and safeguarding fundamental rights and environmental standards. While there is a need to avoid overregulation that could stifle innovation and burden businesses, it is equally crucial to establish regulations that prioritize human rights and environmental sustainability.

By taking a supportive stance towards the regulation of AI, the European Parliament acknowledges the need for a careful and measured approach. It recognizes the concerns of small and medium-sized enterprises that prefer to avoid excessive regulation while understanding the value of protecting fundamental rights and environmental standards.

Overall, the European Parliament aims to establish regulations that create an environment where AI technology can thrive while ensuring its responsible use that promotes individual well-being and environmental preservation. This approach aligns with the United Nations’ Sustainable Development Goals, particularly SDG 9 (Industry, Innovation, and Infrastructure) and SDG 16 (Peace, Justice & Strong Institutions). It demonstrates a commitment to addressing the ethical and societal implications of AI technology and sets an example for other regions and countries grappling with similar challenges.

Nathalie

In order to address the emerging online threats and vulnerabilities affecting children, there is an urgent need for a comprehensive online risk assessment. This assessment can provide valuable insights that will inform policies and industry standards aimed at protecting children online. By understanding the specific risks and vulnerabilities that children face in the digital world, stakeholders can develop targeted measures to safeguard their well-being.

It is crucial to recognize that the online landscape is constantly evolving, with new risks emerging all the time. Therefore, a comprehensive assessment is necessary to ensure that policies and industry standards remain effective and up to date. By identifying and analyzing these risks, decision-makers can better understand the scope and severity of the challenges faced by children in cyberspace.

To successfully protect children’s rights online, it is essential for governments, companies, academia, educators, and civil society to collaborate. Each stakeholder brings unique expertise and perspectives to the table, making multi-stakeholder collaboration vital in reducing online risks. By working together, these different entities can share knowledge, resources, and best practices, and develop comprehensive strategies to safeguard children and promote their digital well-being.

Moreover, this collaboration is not just limited to protecting children’s rights, but also contributes to the global partnership for sustainable development. The need for a safe and secure digital environment is aligned with the Sustainable Development Goal 17.16, which aims to enhance global partnerships for sustainable development. By engaging in multi-stakeholder collaboration, stakeholders can collectively work towards creating a safer online space for children, supporting the broader goal of sustainable development.

In conclusion, a comprehensive online risk assessment is crucial for addressing the evolving online threats and vulnerabilities faced by children. It provides the necessary insights to shape effective policies and industry standards. Additionally, multi-stakeholder collaboration is of paramount importance in reducing online risks and protecting children’s rights. The involvement of governments, companies, academia, educators, and civil society is essential for enhancing the global partnership for sustainable development and ensuring a safer digital environment for children.

Brando

Brando emphasises the need for the involvement of young people in the design and governance of AI policies. He recognises that young people bring a unique perspective and understanding, which is essential in shaping policies that are relevant and effective. Brando is actively working on the AI Act which includes a clear reference to the importance of stakeholder involvement, including young people.

In addition to his focus on youth involvement, Brando also recognises the crucial issue of understanding and handling the tension between democracy and new technologies. He believes that this issue requires more engagement from young people, similar to the global mobilisation they have shown for climate issues. Brando commends the efforts of young people in advocating for climate action and sees a need for similar engagement in addressing the challenges posed by new technologies.

Brando’s work extends beyond mere recognition and advocacy. He is actively involved in negotiating for the inclusion of stakeholder involvement in the parliament text of the law. By doing so, he aims to ensure that the perspectives of young people and other stakeholders are considered and integrated into the decision-making process.

Overall, Brando’s stance highlights the significance of youth involvement in shaping AI policies and addressing the tension between democracy and new technologies. His recognition of the global dimension in legislative work and the need for stakeholder engagement reflects a comprehensive and inclusive approach. By actively working towards these goals, Brando aims to create policies that are democratic, equitable and responsive to the challenges of our rapidly evolving technological landscape.

Peter

The involvement and interests of the youth community have greatly enhanced the Declaration for the Future of the Internet (DFI) process. A successful half-day workshop, held on the first day of the Internet Governance Forum (IGF), had youth IGF rapporteurs participating as animators and reporters. This workshop emphasised the importance of the DFI and highlighted the critical role of youth in shaping the digital future and the governance system of the DFI.

The main objective of the DFI is to integrate governments that are already part of the multi-stakeholder process into various communities. This approach aims to bridge the gap between the government and other stakeholders, including civil society, academia, the business sector, and most importantly, the youth. By involving diverse stakeholders, the DFI ensures that concerns from different communities, particularly the youth, are considered.

It is argued that the DFI provides an opportunity for governments to become more aware of concerns raised by various communities, including the youth. By actively involving governments in the multi-stakeholder process of the IGF, the DFI aims to make them more engaged and informed decision-makers. This facilitates a democratic approach to internet governance by incorporating diverse perspectives.

Furthermore, governments that believe in democratic principles and a human-centric nature of the internet are encouraged to support and sign up for the DFI. By participating in the DFI, governments can engage with like-minded countries and have meaningful interactions. Additionally, the DFI plays a significant role in the Global Digital Compact (GDC) process and the World Summit on the Information Society Plus 20 (WSIS+20) discussions.

In conclusion, the active involvement and interests of the youth community have positively influenced the success of the DFI process. The DFI seeks to bring governments closer to the multi-stakeholder process of the IGF and raise awareness about the concerns of different communities, including the youth. Governments that value democratic principles and a human-centric internet should actively support and participate in the DFI. By doing so, they can engage with like-minded countries and play a significant role in shaping the future of internet governance.

Regina Fuxova

Regina Fuxova, a member of EURID, recognizes the Youth Committee as an integral aspect of the company’s corporate governance. This committee serves as a platform for inspiration and the dissemination of information concerning EURID’s activities, providing members with new opportunities to enhance their future careers. The involvement of young people in the committee is a testament to EURID’s commitment to youth inclusion.

EURID goes beyond youth involvement solely within the Youth Committee and extends it to activities for smaller children, such as Code Week.eu. This inclusion emphasizes the importance of involving young people in various aspects of EURID’s work. By engaging young individuals in activities such as Code Week.eu, EURID demonstrates its dedication to fostering a sense of inclusion and inspiring young minds.

EURID’s commitment to raising awareness about cybersecurity and Internet governance is demonstrated through initiatives like the ‘Safe Online’ art competition. This competition, designed for high school students, aims to start conversations about these vital issues with teachers and, indirectly, with parents. By organizing such events, EURID actively spreads awareness about the importance of cybersecurity and Internet governance, contributing to the UN’s sustainable development goals of Decent Work and Economic Growth and Industry, Innovation, and Infrastructure.

Regina Fuxova further showcases her support for EURID’s youth inclusion initiatives by suggesting that the organization shares its best practices with other peers in the field. This proposal highlights her belief in the strength of EURID’s approach and suggests that other organizations could benefit from implementing similar strategies. Through sharing its best practices for youth inclusivity, EURID can inspire and guide other entities in their own efforts.

In conclusion, Regina Fuxova’s perspective on the Youth Committee as a vital component of EURID’s corporate governance, EURID’s commitment to youth inclusion through activities like Code Week.eu, its efforts to raise awareness about cybersecurity and Internet governance, and Regina’s suggestion to share best practices all reflect EURID’s dedication to youth involvement and inclusive practices. These initiatives contribute to the achievement of the sustainable development goals of Quality Education, Reduced Inequalities, Decent Work and Economic Growth, and Industry, Innovation, and Infrastructure.

Colleague

The discussion centred around the Hiroshima process, which aims to enhance global cooperation among G7 countries in the field of Artificial Intelligence (AI). This process complements the AI Act introduced by the European Union (EU), which seeks to ensure AI systems undergo a risk-based security analysis.

The EU places significant emphasis on developing AI that is human-centric and aligned with fundamental rights. It actively works towards legislation addressing the ethical concerns of AI, aiming to establish regulations that guarantee responsible and accountable AI use.

The EU encourages a multidisciplinary approach to AI, recognizing its complexity and the need for input from various sectors and stakeholders. Discussions have taken place on establishing a multi-stakeholder forum to foster collaboration and knowledge sharing. These initiatives demonstrate the EU’s commitment to engaging the international community and avoiding isolation in developing and regulating AI technologies.

Overall, participants supported regulating AI while promoting innovation. They advocated for a framework for AI regulation akin to the regulation of medicines, ensuring appropriate scrutiny and oversight while allowing room for advancement.

The analysis primarily focused on the positive sentiment surrounding AI regulation and innovation, indicating a widespread recognition of the need for responsible and ethical AI development. The emphasis on risk-based security analysis, human-centric AI, and the multidisciplinary approach highlights a strong desire to align with international standards and respect fundamental rights.

In conclusion, the discussion underscores the importance of global cooperation, multidisciplinary collaboration, and ethical considerations in AI regulation and innovation. The EU and participating countries are committed to creating a regulatory framework balancing innovation and safeguarding individual rights and well-being.

Yulia Mournets

The Youth Internet Governance Forum (Youth IGF) has actively contributed to shaping the future of internet policies, with a particular emphasis on involving young leaders in decision-making processes. Yulia Mournets, a key figure in the Youth IGF, stressed the importance of dialogue between the youth and the Internet Governance Forum (IGF) in influencing the policies that will shape the future of the online world.

The European Parliament delegation has shown potential support for a working group focused on the IGF. This is a positive development, as it indicates that the youth’s perspective and participation in internet governance are being recognized and valued by influential stakeholders.

The Youth IGF has made significant recommendations for a digital compact, one of which is the establishment of youth advisory committees within private sector structures. This recommendation aims to ensure that young people have a voice in decision-making processes related to internet policies. Notably, the Youth Advisory Committee created by EURID serves as a successful example of implementing such recommendations.

Under the presidency of the Czech Republic, the Youth IGF actively participated in several meetings, which demonstrates their dedication and commitment to advocating for youth involvement in internet governance. This involvement extends beyond Europe, as the Youth IGF has established more than 10 safe internet committees in African countries, highlighting their global reach and impact.

The Youth IGF has also played a significant role in the child online protection initiative of the International Telecommunication Union (ITU). Their contribution to this initiative underscores their commitment to ensuring a safe and secure internet environment for young people.

Furthermore, the Youth IGF’s recommendations have led to the establishment of a special category for the .EU award, which focuses on recognizing the achievements of young entrepreneurs. This acknowledgement of young entrepreneurs’ contributions aligns with the Sustainable Development Goal 8 (Decent Work and Economic Growth) and further solidifies the Youth IGF’s influence in shaping policies that support economic opportunities for the youth.

In conclusion, the Youth IGF has actively participated in shaping internet policies, with a particular focus on involving young leaders in decision-making processes. Their efforts have been acknowledged and supported by entities such as the European Parliament delegation, and their recommendations have led to the successful implementation of initiatives such as youth advisory committees and the .EU award category for young entrepreneurs. The Youth IGF’s impact extends beyond Europe, with their involvement in meetings under the Czech Republic presidency and the establishment of safe internet committees in African countries. Ultimately, their dedication to advocating for youth participation in internet governance has made a positive contribution to the future of the internet.

Muhammad

The analysis discussed the importance of including youth in the digital governance sector and cooperation sector. It emphasized that youth are not only current stakeholders but also future leaders in digital transformation. Their active involvement in digital governance is crucial for shaping policies and strategies that will have a long-term impact on the digital world.

One noteworthy individual mentioned in the analysis is Muhammad, who serves as the Generation Connect Youth Envoy for the Asia-Pacific region with the International Telecommunication Union (ITU). His interest in digital governance further underscores the importance of youth engagement in this sector. His involvement brings valuable perspectives and insights that can contribute to the development of effective digital governance mechanisms.

The argument put forth is that youth, as the ones who embrace digital transformation most passionately, should be included in the digital governance infrastructure. This inclusion is seen as essential for ensuring the continuity of digital knowledge and skills to future generations. By actively involving youth in decision-making processes, their unique experiences and perspectives can be leveraged to develop inclusive and sustainable digital policies.

Furthermore, the analysis highlighted that including youth in digital governance and cooperation aligns with several Sustainable Development Goals (SDGs). These include SDG 4 – Quality Education, SDG 9 – Industry, Innovation and Infrastructure, and SDG 17 – Partnerships for the Goals. Involving youth in digital governance not only supports their educational development but also promotes innovation and fosters collaborations that drive positive change.

The sentiment towards the importance of including youth in digital governance is consistently positive throughout the analysis. It is clear that all speakers recognize the value of youth contribution in the digital governance sector and believe in their potential as agents of change. By creating an inclusive and youth-centered digital governance ecosystem, societies can harness the immense talent and creativity of young individuals to shape a future that is technologically advanced and socially equitable.

In conclusion, the analysis and observations made strongly advocate for the inclusion of youth in the digital governance and cooperation sector. Youth are not just passive consumers of digital technologies but active participants and drivers of digital transformation. Their perspectives and insights are vital for creating sustainable and inclusive digital policies that benefit present and future generations. By involving youth in decision-making processes and fostering collaborations, we can harness their potential to shape a technologically advanced and socially equitable digital future.

Herman Lopez

Herman Lopez, a member of the standing group of the Internet Society, has expressed concern regarding Latin America’s limited participation in global Artificial Intelligence (AI) discussions. Lopez highlights the absence of Latin America in AI talks, while noting the active engagement of India and Africa. He advocates for the inclusion of Latin America, emphasising the importance of reducing inequalities and promoting representation.

Lopez’s concern arises from the fact that Latin America has seemingly been excluded from AI discussions, despite the potential contributions the region could make and the need for diverse perspectives in shaping AI policies and implementation. He argues that this exclusion prevents Latin America from influencing the development of AI systems that address its specific needs and challenges.

By highlighting the active involvement of India and Africa in shaping global AI discussions, Lopez provides evidence of other regions’ participation. This highlights the importance of Latin America having a voice in these discussions, to ensure its interests and perspectives are considered in the development of AI technologies.

Lopez’s call for the inclusion of Latin America in global AI discussions is driven by the goal of reducing inequalities. He believes that AI has the potential to exacerbate existing inequalities if it is driven solely by the interests of powerful countries or regions. By including Latin America, with its unique socio-economic context and challenges, in these discussions, Lopez argues for a more inclusive and equitable approach to AI.

Furthermore, Lopez emphasizes the importance of representation in AI discussions. By including Latin America, a region with diverse cultural, social, economic, and political contexts, decision-making processes around AI can be enriched. This diverse representation can lead to a comprehensive understanding of the implications of AI on different communities and ensure that the development and deployment of AI technologies are fair and inclusive.

In conclusion, Herman Lopez expresses concern regarding Latin America’s limited involvement in global AI discussions, while noting the active participation of India and Africa. He advocates for the inclusion of Latin America, highlighting the need to reduce inequalities and promote representation. By giving Latin America a voice in shaping AI policies and technologies, Lopez believes that a more inclusive and equitable approach to AI can be achieved, mitigating the potential adverse effects of unchecked AI development.

Irena Joveva

The speakers in the European Parliament discussed several important topics related to youth and digital literacy. Irena Joveva, the youngest elected delegate from Slovenia, emphasised the need for greater inclusion of the younger generation in the European Parliament, expressing appreciation for their involvement. This aligns with SDG 16, which aims to promote peace, justice, and strong institutions.

One speaker highlighted the importance of media freedom and the fight against disinformation. They mentioned their role in the recently adopted Media Freedom Act and the initiation of inter-institutional negotiations, taking a positive step towards protecting media freedom and democratic principles. This promotes transparency, accountability, and informed decision-making, all crucial for SDG 16.

The undervaluation of digital literacy, especially among young people exposed to the digital world, was also discussed. The speakers emphasized the need to give digital literacy the recognition it deserves, as it plays a significant role in achieving SDG 4, which focuses on quality education.

Furthermore, the speakers called for increased efforts from schools and politicians in promoting digital literacy. This raises questions about the responsibility of educational institutions and policymakers in ensuring that young people have the necessary digital skills. This argument aligns with SDG 10’s goal of reducing inequalities, promoting digital inclusivity, and bridging the digital divide.

In summary, the analysis highlights the importance of youth involvement in the European Parliament, the need to protect media freedom and combat disinformation, and the undervaluation of digital literacy. It also prompts further exploration of the responsibilities of schools and politicians in promoting digital literacy. By addressing these issues, policymakers and stakeholders can work towards building a more inclusive and digitally empowered society.

Ananya

Three key arguments related to youth participation in digital technologies were presented. Firstly, it was emphasised that young people should be involved as stakeholders in any process related to digital technologies. This was supported by the fact that Ananya is a youth advisor to the USAID Digital Youth Council and is actively involved in the design and implementation of the Digital Strategy. The significance of this argument is underscored by the statement that digital technologies influence young people’s aspirations, ideas, and lives right from birth. By involving young people as stakeholders, their unique perspectives and insights can be incorporated into the decision-making processes, ensuring that the digital technologies being developed and implemented meet the needs and aspirations of the youth.

The second argument put forward was that young people from diverse backgrounds must be provided with a platform to share their inputs on policies that influence their lives. This argument was justified by Ananya’s suggestion to host consultations, youth summits, side events, networking sessions, conferences, exhibitions, and educational programmes. This inclusive approach recognises the importance of enabling participation from all segments of society and the value of diverse perspectives. Ananya further emphasised the significance of local, national, and international level fora to make the policy-making process more accessible, inclusive, and globally relevant. By actively involving young people from diverse backgrounds, policies can be better informed, resulting in reduced inequalities and stronger institutions.

Finally, it was highlighted that leveraging digital platforms and social media can be effective in engaging young people. Ananya emphasised the creation of interactive online spaces and the use of social media campaigns, hashtags, and online events like webinars to raise awareness and mobilise support from and with the youth. This approach recognises the increasing influence of digital platforms on young people’s lives and the ease with which they can connect and engage on these platforms. Utilising digital platforms and social media provides a powerful tool to reach and involve young people in discussions and decision-making processes related to digital technologies.

In conclusion, the arguments presented highlight the importance of involving young people as stakeholders in the development and implementation of digital technologies, providing a platform for their inputs on policies, and leveraging digital platforms and social media for effective engagement. By adopting these approaches, there is potential to create a more inclusive and impactful digital ecosystem that meets the needs and aspirations of young people from diverse backgrounds. It is vital to recognise the value of youth participation and ensure their voices are heard and incorporated into decision-making processes to build a digital future that is equitable and relevant for all.

Levi

The analysis of the provided information highlights several significant points raised by the speakers. Firstly, there are concerns about the impact of AI, misinformation, and disinformation, especially when perpetrated by certain government officials. This raises questions about the reliability and potential consequences of information in today’s digital age. The speakers have a negative sentiment towards this issue and stress the need for vigilance and measures to combat the spread of false information.

Secondly, the role of youths in internet governance and decision-making is emphasized. As three-quarters of internet usage is by the youth, their involvement becomes crucial in shaping policies and decisions related to the internet. The speakers acknowledge the innovative ideas and perspectives young individuals bring to the table. This underscores the importance of including young voices in discussions surrounding internet laws and regulations. The sentiment towards this point is positive, indicating the recognition of the valuable contributions young people can make.

Furthermore, the analysis reveals a questioning sentiment towards the European Union’s efforts to ensure the sustainability of youth engagement in policy and governance, particularly in the realm of technology and the internet. Levi, one of the speakers, raises doubts about the deliberate actions taken by the European Union to promote youth participation and inclusion. This observation highlights the need for further examination of the European Union’s initiatives and their effectiveness in bridging gaps and fostering sustainable youth engagement.

Lastly, the analysis reiterates the importance of equality and inclusion of youths in decision-making processes to pave the way for a sustainable future. There is a need for deliberate engagement of young individuals to create a sustainable future. This sentiment aligns with the principles of SDG 8 (Decent work and economic growth) and SDG 10 (Reduced inequalities), emphasizing the necessity of empowering and involving young people in shaping policies that directly affect them.

In conclusion, the analysis highlights concerns surrounding AI and misinformation, the significance of youth involvement in internet governance, questioning of the European Union’s efforts in promoting youth engagement, and the necessity of equality and inclusion in decision-making processes. These insights shed light on the complex landscape of internet governance, youth empowerment, and policy-making, prompting further examination and consideration of these issues.

João Pedro

João Pedro, a member of the youth advisory committee, is a strong advocate for the inclusion of youth voices in both private and public institutions. He believes that involving young people in decision-making processes has positive outcomes for all parties involved. João Pedro has found the collaboration between the youth and businesses, such as EURID, to be mutually beneficial.

One area where João Pedro sees potential for improvement is in evaluating strategies such as promoting the .eu domain in different regions of Europe. He suggests that EURID, the organization responsible for managing the .eu domain name, should assess the effectiveness of these strategies within their own structure. This comprehensive approach would provide a deeper understanding of how the .eu domain can be utilized across Europe.

The youth committee, including João Pedro, has been actively contributing valuable insights and feedback to EURID’s activities within the Internet governance ecosystem. Their advisory role positions them to provide guidance and recommendations to EURID, enhancing its decision-making processes.

Overall, João Pedro’s experiences highlight the importance of involving young people in decision-making within institutions. By incorporating youth voices, institutions like EURID can benefit from fresh perspectives, innovative ideas, and a better understanding of the needs and preferences of younger stakeholders.

This case study also emphasizes the significance of youth participation in achieving the Sustainable Development Goals (SDGs), particularly SDG 10: Reduced Inequalities and SDG 16: Peace, Justice, and Strong Institutions. By including young people in decision-making, we can work towards a more equitable and just society.

Christian-Sylvie Bouchoy

The European Parliament delegation to the IGF, led by Christian-Sylvie Bouchoy, is actively involved in internet governance forums, and the Parliament is committed to supporting the activities of the Internet Governance Forum (IGF). Bouchoy, a member of the European People’s Party and President of the Industry, Research and Energy Committee, introduced the members of the European Parliament delegation to the IGF.

The European Parliament is strongly committed to supporting IGF activities. They have participated in most of the IGF forums and have initiated a letter to President Roberta Metzola to form a permanent working group on IGF in the European Parliament. Additionally, members of the European Parliament are involved in different legislative dossiers on various areas related to internet governance.

In the digital area, the European Parliament is actively developing legislation. It has already adopted legislation on data governance, the Digital Markets Act, the Digital Services Act, and cybersecurity, and in June it adopted its position on the Artificial Intelligence Act. The Parliament is currently engaged in inter-institutional negotiations with the Council and the European Commission to finalize that legislation. These efforts demonstrate the Parliament’s commitment to addressing the challenges presented by AI and ensuring the responsible use of technology.

The European Parliament strongly believes that artificial intelligence should not be used for mass surveillance. Its position on artificial intelligence reflects particular concern about the ethical issues surrounding biometric AI usage. The Parliament advocates for the responsible and regulated use of AI.

The European Parliament encourages youth involvement and consultation in decision-making processes. It recognizes the need for stronger and clearer involvement of young people in decisions related to digital legislation and the future of artificial intelligence. Some of the younger members of the European Parliament are actively connected to youth communities and support their participation.

The European Parliament acknowledges the importance of dialogue and cooperation in internet governance. It has strong ties with Latin America and Africa and believes in working closely with them on issues related to internet governance and artificial intelligence. It has also suggested the possibility of establishing a similar network in Latin America.

Youth participation, particularly through the European Youth IGF and the public consultation phase, is deemed critical in shaping legislation on internet governance. The European Parliament commends the consultation process with Director O’Donohue and encourages the youth to take part in it.

In conclusion, the European Parliament delegation, under the leadership of Christian-Sylvie Bouchoy, is actively engaged in internet governance and dedicated to supporting IGF activities. The Parliament is actively developing legislation in the digital area and advocating for the responsible use of artificial intelligence. Youth involvement and consultation are encouraged, and strong partnerships are being established with Latin America and Africa. The Parliament believes in the importance of constructive dialogue and recognizes the vital role young people play in shaping the future of internet governance.

Stefanets

Stefanets actively advocates for the organization of special events at the European Parliament to promote cooperation and involve young people. These events provide a platform for individuals to exchange ideas and establish regular cooperation. By drawing inspiration from the perspectives of the younger generation, senior members of the Parliament can benefit from their insights.

One significant event Stefanets supports is the Youth Forum, where young individuals present their ideas and contribute to discussions on important issues. Stefanets actively participates in the Forum, fostering an inclusive environment that values and encourages young voices. They recognize that many innovative concepts originate from the Youth Forum, highlighting the importance of engaging with young people and leveraging their fresh perspectives.

In addition to youth involvement, Stefanets prioritizes quality education, in line with SDG 4 (Quality Education). By fostering the development of young people’s ideas, Stefanets empowers them to contribute to the Sustainable Development Goals.

Stefanets also focuses on the digital decade, addressing issues such as addictive design and online child protection. They actively engage with children to understand the dangers they face in the digital world, allowing them to shape policies that safeguard their well-being.

The arguments presented by Stefanets reflect a positive sentiment towards promoting youth involvement, idea development, and prioritizing the well-being of children in the digital realm. By encouraging cooperation and engaging with young people, they aim to create a more inclusive and progressive future.

Overall, Stefanets’ commitment to organizing special events, supporting the Youth Forum, and addressing digital challenges showcases their dedication to cooperation, empowering youth, and safeguarding children’s well-being. Their actions align with SDG 16 and SDG 17, focusing on peace, justice, strong institutions, and partnerships for the goals.

Nadia Chekhia

During a discussion on youth participation in internet governance, two speakers shared their perspectives. The first speaker, who is responsible for coordinating the youth activities of the European Regional IGF, expressed doubts about how meaningful participation should be defined. They emphasized the need to reflect on this matter and to gain a better understanding of what meaningful participation truly entails. The sentiment of their argument was neutral.

On the other hand, the second speaker strongly advocated for integrating more young people from across Europe into the system of internet governance. They believed that it was crucial to provide youth with leadership positions to enhance their involvement. This approach aligned with the positive sentiment of the second speaker’s argument.

Both speakers highlighted their commitment to comprehending the concept of meaningful participation. They emphasized the importance of exploring this notion in depth and working towards implementing it.

The first speaker’s argument raised questions regarding the definition of meaningful participation, indicating a potentially critical analysis of the current understanding of the concept. The second speaker, on the other hand, firmly believed in the necessity of promoting youth involvement in internet governance and assigning them leadership roles.

This discussion on youth participation in internet governance sheds light on the varying perspectives within the field. It portrays the complexities involved in defining and implementing meaningful participation and highlights the importance of involving young people in decision-making processes. Such efforts can contribute to achieving the Sustainable Development Goals, including SDG 4 (Quality Education), SDG 5 (Gender Equality), SDG 10 (Reduced Inequalities), and SDG 16 (Peace, Justice, and Strong Institutions).

Session transcript

Yulia Mournets:
All right, so good afternoon, and probably good morning from the very early Europe. I will be, as I said, moderating this session online together with you. We do have our colleagues on site, namely Levi; in case something happens, he will be able to help and to jump in on the moderation. So I’m Yulia Mournets. I’m the founder of the Youth IGF, and it’s an absolute pleasure and a privilege to have this open forum together with the EU delegation to the IGF and the members of the European Parliament. I would like to welcome them. We do have other speakers present in the room as well. So I would like to say hello to Christian-Sylvie Bouchoy. You are the head of the European Parliament delegation to the IGF. Mr. Bouchoy, maybe you can introduce the members of the Parliament present in the room and with you, so we can see and we can know who is present and whom we can address. We also have with us a few young people and leaders from the Youth IGF, together with the representatives from the EU present on stage. And I believe we also have the European Commission present in the room. Mr. Pearse O’Donohue, I don’t know if Pearse O’Donohue could also join the stage so we can have this conversation. Mr. Christian-Sylvie Bouchoy, please, maybe you can introduce the delegation.

Christian-Sylvie Bouchoy:
Good morning for Europe, good afternoon in Kyoto. We are very honoured to be here today in, I would hope, a very interesting discussion with the Youth IGF. We also had, last evening, a bilateral discussion with a few of the young people very much involved in IGF activities, and we understood a little bit better some particularities in some parts of the world. But for the moment I would just like to introduce the colleagues. So I’m Christian Bouchoy, I’m a member of the European People’s Party, President of the Industry, Research and Energy Committee, coming from Romania. I have, also from the EPP Party and coming from Slovakia, Mr. Stefanets. Thank you very much. I have from Renew, coming from Denmark, together with us today, Mr. Lokegard. Also from the Renew Party, coming from Slovenia, Mrs. Irena Joveva. From the Left, coming from Cyprus, Mr. Niazi Kizilure. And coming from France, a member of the ID group, Madame Marie Doshi. Madame Kumpula Natri, a member of the Socialist delegation, will be here in five minutes maximum. Maybe I will not have the chance to introduce her then. But because my colleagues are involved in different legislative dossiers in different areas, if there are some specific questions, please allow me to invite each of my colleagues at some point to intervene and give brief answers or brief comments on the issues that we are supposed to discuss. I will finish by saying that the European Parliament is strongly committed to supporting IGF activities. We have participated in most of the IGF forums. For me, it’s the second time, but for Mrs. Kumpula Natri, for instance, it’s the sixth time. And there are also colleagues who are veterans in participating on behalf of the European Parliament in the IGF forums.
Also, together with some other chairs of different committees involved, I initiated a letter to President Roberta Metzola to have a permanent working group on IGF in the European Parliament, with colleagues not necessarily part of different delegations, but to have a permanent working group on IGF. We are very committed to the multistakeholder approach. We would like to understand better the realities and particularities of some parts of the world. Apart from having the best legislation: as you know, the European Parliament has already adopted legislation on data governance, on the Digital Markets Act, on the Digital Services Act. Also on cyber security: we have strong cyber security legislation, not only for institutions, but also for critical infrastructures. And in June, the European Parliament adopted its position on the Artificial Intelligence Act. I know that there is a lot of interest from many participants at the IGF, and the European model of looking at risk assessment was also mentioned in some speeches. And now we are in the inter-institutional negotiations with the Council, with the Member States, and with the participation of the European Commission, and I see here a Director in the European Commission and his team, to upgrade, to improve, and to have the final legislation on artificial intelligence. And of course, we are looking forward to developing other legislation on the digital area and the Internet, and this is something that could be discussed in the coming moments. Thank you once again for having us here.

Yulia Mournets:
Thank you so much, Christian-Sylvie Bouchoy, for this introduction, for the introduction to the European Parliament delegation’s priorities for this IGF, but also for the information about this working group on the IGF. And we do hope, actually, that this working group, which potentially could be approved by President Metzola, can also have a channel of communication with the Youth IGF and with the young leaders. And by saying that, I would say that you practically made the introduction to the subject of our discussion today, because what we would really like to discuss, in a very tangible manner, is a kind of framework, and why not, this group might become that framework, or part of that framework, for the sustainable participation of young leaders in the discussion and in Internet governance. So we would like to propose that this very short open forum be structured in the following way, in three parts. First, to discuss practically the recommendations the Youth IGF and the young leaders made to the Global Digital Compact, based on tangible examples; namely, we would like to discuss examples of how one of the recommendations has already been implemented. In the second part, we would like to discuss the Declaration for the Future of the Internet and the participation of young people in this process. And the third one is the Hiroshima process, and also what the say of young people in it will be. So by saying that, the Youth IGF made these six recommendations to the Global Digital Compact, and one of the recommendations was part of that, and we discussed it in previous years, and I believe your colleagues from the European Parliament were present with us. Kumpula Natri was the chair last year and present, and we discussed this recommendation there. So one of the recommendations was to establish a kind of youth advisory committee within private sector structures.
So to advise and really work together, advising on young leaders’ and youth views on different policies within corporate structures. That same year, EURID, which operates the .EU, together with us, took this opportunity, and we established together, the Youth IGF and EURID, the youth advisory committee. And that’s quite unique, I think. We do know about another example, which is in the Russian Federation, and another example, probably, in Australia. So that’s quite unique, and I would like to call very quickly on João. João is present in the room, I believe, or even on stage, I see him. So he was a member of this first youth advisory committee, and he’s still a member of this youth advisory committee to the .EU, to EURID. And I would like, João, to ask you a very straightforward question. And please be short, because we have very limited time today for the subject we would like to discuss. So could you share with us your experience in terms of what you were able to bring to EURID and what EURID actually brought to you? Very shortly, João, please.

João Pedro:
So good afternoon, everyone. I’m João Pedro, I’m coming from Portugal. Hi, Julia, online; I hope things are not too early there. About EURID and about the youth committee: as you mentioned a little bit, the idea of having diversity is always good, and including the voice of the youth, even within private and other institutions, is important. But I think it’s relevant to think about how this can be a value proposition for both sides. That has been the challenge of many other projects I’ve participated in, but I think it’s something that works at the youth committee for EURID. So far, we’ve been able to pitch and present opportunities in Internet governance to EURID; the idea from the beginning was to advise and provide feedback on the Internet governance ecosystem to EURID. I think there are a lot of things that we could continue and start doing. Moving forward, maybe taking a look at policy from within the EURID structure, evaluating, for instance, strategies for the dissemination of the .eu in the different regions of Europe, and contributing to the activities of EURID that are not really part of the business perspective. That has been, in the end, where we’ve been providing more value, I would say.

Yulia Mournets:
Thank you, João, and thank you for indeed being short. Thank you for this statement. We will ask the members of the European Parliament to comment on that initiative, what they think, and how it can be implemented in general. But I would like to turn to Regina Fuxova. Regina, you work with EURID. You arrived at EURID, probably, when the Youth Advisory Committee was already established, but maybe I’m mistaken. So I wanted to ask you: what is the impact of this Youth Committee for EURID, for .eu, and how can we make this idea of establishing youth committees, and particularly the Youth Committee within EURID, work in a sustainable manner?

Regina Fuxova:
Thank you very much. Yes, I will try my best. Thank you very much, Julia. My name is Regina Filipová Fuxová, and I work for the .eu registry. Actually, my inventory number in the company is quite low: I arrived already in 2007, so well before the Committee was established. But due to some changes in our company, I ended up as Industrial Relations Manager last year, and happily, in my portfolio, I also got the honor of being the liaison for the Youth Committee. We now have a year of change. We are in the process of onboarding new members, and we will be present, among others, also at the .eu Day, which takes place on the 16th of November in Brussels. So those of you who are going to participate, and we would really be pleased to see you there, will also have the opportunity to speak not only to long-standing members but also to new members, and they can share their impressions. The Youth Committee is a very well-established part of our corporate governance. We are trying to include youth and young people not only in this committee, where the aim is to inspire each other and spread the word about our activities, to get a fresh view, and hopefully to give the members new impulses for their future careers in and beyond the Internet governance field. We also offer activities for smaller children at primary school within another activity of the European Commission, Code Week (codeweek.eu), which is happening now in October. And we always make use of our presence in different member states; we have four offices, so we are reaching local communities, at least in those countries. And then for high school students, we have an art competition called Safe Online. Those of you who visited our booth could see the very nice artworks. I just brought an example of one of them. Taking into account that they were created by teenagers 15 to 20 years old, you would not get better results from professional designers.
We were really amazed. And we use this as an opportunity to start discussion at the school with the teachers, but also indirectly with the parents about cybersecurity, Internet governance, what does it mean for them. We offer an introductory presentation to start this discussion. So from our perspective, it’s very enriching also for our further development, and even though it’s not directly connected to generating, let’s say, some growth or hard numbers, the spreading of awareness is an important part. And we try, because we are a member of a technical community, so it’s not very typical, and we spread these best practices among others on a central level with other peers and try to inspire them to go this way as well. Thank you.

Yulia Mournets:
Thank you, Regina. Thank you for sharing the experience and updating us on the activities of the Youth Committee, and, for a number of people present in the room and online, I can imagine, also just bringing the information that it exists. I would like to call on the members of the European Parliament, Mr. Bouchoy, or someone else from the delegation who would like to comment very shortly on what you just heard, on the experience, and also maybe to raise the question of how this can be implemented at a larger scale, and why not globally. That’s a great example, a great example coming from Europe, from the EU. So what is your opinion on that, Mr. Bouchoy, or someone else from the delegation?

Christian-Sylvie Bouchoy:
Maybe the youngest member of the delegation, of the European Parliament delegation, Mrs. Irena Joveva, as I said, from Renew Group from Slovenia. And I would also like to welcome, he joined immediately after I made the presentation, Mr. Sergei Lagodinsky from the Greens from Germany. Mrs. Joveva, please.

Irena Joveva:
Thank you very much. It’s okay if I stand here? Or do I? Okay. So hi, once again. I’m actually also the youngest one who was elected in Slovenia, which is a little bit sad, because it took such a long time, and I was 30, you know, I wasn’t even that young. So I’m really very, very happy every time I have a chance to speak with younger generations. And I’m also very happy that the European Parliament as such is getting younger, if I may say so, because it’s always a nice mix to have experienced members, but of course the younger ones also have to be involved and included. So I really appreciate everything that you told us, and the examples and experiences; it’s very nice to hear that. I think that the younger ones could be more included in the European institutions or their work as such. I try to include them as much as I can in my work or the fields that I cover, if I may say so. Also, this is my first IGF, so I have nothing to compare it with, but it’s really nice to have the Youth IGF as well, emphasizing or being part of it so concretely. I was a journalist before I came into politics, so my main topic over the last few weeks or even months has been the Media Freedom Act that we prepared. We actually adopted the European Parliament stance, I think it was at this last October plenary session, with long days in the timeframe and everything. So we are also starting with the inter-institutional negotiations there. It is also connected with the internet, obviously, especially the disinformation and misinformation that we are all dealing with, more or less.

So maybe here a question or a comment from you guys would be helpful, because of course the younger generation is the most exposed, being the most on the internet or in the digital world; the physical and the digital are more or less the same now, I think, in these times. So I feel that digital literacy as such is underestimated; I do not feel that we are all aware of how important this is. So maybe I would have a question here: how do you think or how do you feel about it? Do you think that the schools should do more, or that the politicians should do more? I mean, we try to do as much as we can, but of course at the end of the day we are limited, I mean, we are limited by the member states or whatever. So this is my question, and if you have any questions regarding media freedom or disinformation or something, I will be happy to answer. Thank you.

Christian-Sylvie Bouchoy:
Thank you. If you allow, also a short intervention from Mr. Stefanets, EPP Slovakia, one of the senior members of the delegation, but very close to the youth movement.

Stefanets:
Thank you, Chair. Great to be with you today. Senior people definitely also have to take inspiration from young ones, and the digital future is about young people, so it’s great to be with you. But if the question is how to make our contacts more regular, how to establish our cooperation, I think it’s possible to organize even special events on different topics at the European Parliament, so you are welcome. So that’s part of the answer, number one. Number two, we already have regular meetings which are organized by our European People’s Party, the so-called Youth Forum. It is in September: one week when young people can come to the European Parliament and present their respective ideas, and most of the ideas are about the digital future, I can assure you. So there is a lot of inspiration there when talking about new legislation, so we can also get more inspiration from you. So you are welcome to participate in the Youth Forum, you are welcome to come with new ideas which we can develop at special events, and answer number three is that we can also organize a special event between us and the Youth IGF. It is possible to do that relatively soon, in two or three months, in the European Parliament. In terms of content, I am particularly working on the Digital Decade development, I am also working on addictive design right now, and also on online child protection. So if you are also working with children, maybe we can get more input on how they see things, on what is dangerous from their perspective, and all inputs are welcome. So looking forward to participating and cooperating with you. Thank you very much.

Christian-Sylvie Bouchoy:
Back to the moderator. I think maybe someone would like to intervene from the youth participants.

Yulia Mournets:
Thank you, Mr. Bouchoy. We will maybe just follow on, and we will give the young people the opportunity to intervene in a couple of minutes. I would just like to thank the members of the European Parliament for their interventions. And indeed, the Youth IGF does work with children and on the question of child protection, because we were one of the organizers and founders of the Child Online Protection initiative of the ITU. As well, the Youth IGF has established more than 10 Safe Internet committees outside of Europe, mainly in African countries, working with our African friends. So that’s the information we’d like to share with you, and indeed, it would be very interesting to organize these topic-based debates. But I would like to come back to the EURID example and to follow up on what Regina Fuxova mentioned, the .EU Days, and on what we also established together with the .EU: a special category of the .EU award, which is focused on young entrepreneurs. And that’s also a second achievement, I would say, in a very tangible manner, that came from one of the recommendations of the Youth IGF to the Global Digital Compact. But let’s go to the second topic, which is the Declaration for the Future of the Internet. As you might know, the Youth IGF also actively participated in a number of meetings, namely in the Czech Republic under the presidency of the Czech Republic. And I would like to turn to Pearse O’Donohue from the European Commission. Pearse O’Donohue is the Director of the Future Networks Directorate at the European Commission. Pearse, what are the priorities, first of all, of the European Commission on the DFI? But also, what is your opinion in terms of the impact that the young people brought to the DFI during these different meetings, where we took part and participated on site with the European Commission and, by the way, were invited by your colleagues? Thank you.

Pearse O’Donohue:
Hello, Julia. Thank you very much. Thanks for this opportunity. Indeed, on day zero of IGF, we had a very successful half-day long workshop on the Declaration for the Future of the Internet, and successful from several points of view, partly because we had a number of countries that had not previously been closely associated with the process who were present for the plenary session, but also for a breakout session among governments, and who then brought that back into another session at the end, where it was clear that there was strong support for the principles. There are five principles in the DFI, but that process was also aided by the fact that we had rapporteurs from the youth IGF, who acted as animators and reporters back to the plenary session, who showed the involvement of youth, but also the interest of the youth community in what is being done in the DFI. If I could, without going into too much detail, if I could summarize it as follows, is that while it is a track which is led by governments who sign the Declaration for the Future of the Internet, the purpose is to pull them more closely into the multi-stakeholder process of which they are already a part, and specifically into the process of the IGF, where we see all of the other communities already working hard, in order for them to be made aware of the concerns and to be guided by civil society, academia, but of course the other members, including of course the business segment, so that when they come as member states of the United Nations to negotiate the GDC, they will be fully aware of that input. 
And the input from the youth IGF is actually very much what has been said by the members of Parliament here, is that we’re talking about digital future, so we are talking about what is of most concern to the young people today, and where we, and I speak as one of the greyheads in the room, we have an obligation to ensure that we make a governance system that works and that answers, responds to their concerns, to your interests, because you’re here in the room and online, and we want that process to continue, so that when we talk about human rights, for example, or when we talk about more process-oriented things such as internet governance itself, that you are fully aware of what’s going on, and using the network which youth IGF represents, you can actually inform your members as to why it is important that they are, at national level, talking to your governments to convince them that they should sign up to the Declaration for the Future of the Internet. If they are to state that they believe in democratic principles, if they believe in maintaining human rights and the human-centric nature of the internet, then clearly this is a forum which will allow them to speak with like-minded countries, but also to influence the outcome of the GDC process and the WSIS plus 20. I’ll stop there, thank you very much.

Yulia Mournets:
Thank you. Thank you, Pearse O’Donohue, for giving us your opinion, the opinion of the European Commission, on the DFI, and a note is taken on how we, as the Youth IGF, can help and assist more with the DFI and its implementation. I would now like to turn to Nathalie. We have Nathalie Terkova, I think, apologies for mispronouncing your name, from the Czech Republic. And I would like to ask you: in the DFI we can find the recommendation on collaborative research, and you are a young researcher in the Czech Republic, so what kind of opportunities would you like to see for yourself within the DFI as a young researcher? Very shortly, Nathalie, please.

Nathalie:
All right, thank you so much for the question. It’s actually surprising, as I’m also a PhD student focusing specifically on the digital literacy and digital skills of children and adolescents, so it would be a pleasure to also have more conversations later on. But to keep it short and to answer the question, I would like to highlight a few research opportunities that I personally believe would be very much needed. Firstly, there is the need for comprehensive online risk assessment, because I believe we must delve into the emerging online threats and vulnerabilities affecting children in order to inform the policies and industry standards that help protect children’s rights online. And secondly, as we keep highlighting the multi-stakeholder approach, I would say that collaboration studies are vital, because governments, companies, academia, educators, and civil society must work together effectively to reduce online risks for children, and research can really help us assess the impact of these collaborations. Thank you.

Yulia Mournets:
Thank you, Nathalie, and thank you for being so brief; that is appreciated, as it allows us to have a discussion afterwards. I would like now to go to our third point. But before that, I would like to quote the Member of the European Parliament, Brando Benifei. I don’t know if Brando Benifei is with us in the room. If not, we would like to quote what he said on stage yesterday: if you do not engage, they do not care. And it is also the duty of institutions and society to give them instruments to be involved. But if they refuse to do so, then others will do it in their place. I think that is actually a good summary of what we are trying to reach and discuss here today: what instruments young people can have in order to be fully involved, with tangible examples. And with that I would like to go to the third point, the Hiroshima process, which has been discussed a lot during this IGF 2023 in Kyoto; we are following online, and Levi, present from our Youth IGF team, was following on site as well. So I would like to ask the members of the European Parliament about the vision of the European Parliament on the Hiroshima process on generative AI, but also how you see the role of young people globally in this Hiroshima process and its potential implementation. Mr. Bușoi, would you like to answer yourself, or would other members of the delegation like to… Yes, I will just say two sentences and

Cristian-Silviu Bușoi:
then I will invite two colleagues who are very knowledgeable on this: Mr. Lagodinsky, who is a shadow rapporteur and will also be involved in the negotiations, and Miapetra Kumpula-Natri, who was vice-chair of the special committee on artificial intelligence. I will kindly ask them to briefly comment and explain a little the European Parliament’s vision. As I said in the introductory remarks, I am very happy to see a lot of interest in the European initiative on artificial intelligence, the risk-based assessment, but also in the position that the European Parliament issued. The most important point is that we will not accept that artificial intelligence should be used for mass surveillance. We have concerns about biometric uses of artificial intelligence, while at the same time, of course, not banning generative AI but trying to regulate it. Of course, we looked mainly at safety and the ethical aspects. The ITRE committee, the committee that I am chairing, also gave input on the business case, on the growth that artificial intelligence could bring to economies, to companies and to individuals, and on its benefits, but at the same time we are looking at the challenges and how to tackle them. And if I am not mistaken, we are the first continent regulating, or starting to regulate, artificial intelligence processes, because I am sure this will evolve a lot in the coming years, and I know that there are other approaches. Yesterday we met the US delegation to the IGF, and we understand a little better the vision in the United States. With the members of the African parliaments, we understood a little how they look at this and how they could be inspired by what the European Union is doing. You mentioned the Hiroshima process in Japan, and even the Prime Minister of Japan mentioned the interest, the concern, and the vision of the great nation of Japan related to this. 
So clearly, participation in IGF also inspired us a little bit more on how we can address artificial intelligence legislation. But two short interventions from colleagues. I start with Mr. Lagodinsky. You have a microphone here, and then I kindly invite

Mr. Lagodinsky:
for a short intervention, Miapetra. Thank you. It’s great to be here, and a great question. We are now, as you know, in the middle of a trilogue, which is a negotiation between the European Parliament, the European Commission and the Council, so the member states. The position of the Parliament is clear. We do not want to ban artificial intelligence, as some were misled to think. We do not want to ban foundation models or generative AI, whatever term you want to use. We would like to regulate them, and most AI is not regulated at all. We have a high-risk approach, where regulation falls only on specific high-risk applications and on generative AI. We can discuss how this regulation takes place. We understand, of course, that we will not advance if there is no international cooperation, and from that perspective the Hiroshima process is a great step forward. However, for many of us the concern is that by using the Hiroshima process to place our dialogue with providers and developers solely on the basis of a code of conduct, we will be distracted from the real regulation of this technology, which is something that citizens expect. There is a lot of unease, and you know the letters about banning AI or putting a moratorium on AI. We do not want to go that far, but we want to give people, and to give our partners in Africa, for example, who are watching very closely what we are doing, an orientation that stays in the realm of regulation: not over-regulating, leaving room for innovation, but also innovating in laws, and this innovative approach is something that the Parliament presented. As I said, and we can discuss this if you are interested, we are in the process of meeting together and balancing our view with the view of the Member States, who would like to be very careful. They want, of course, to protect small and medium enterprises from Europe, who understandably would prefer not to be regulated. 
Well, we are saying we have to walk a middle ground, because it is not just about industry and businesses; it is also about fundamental rights, about environmental standards, and about our view of how we place the human being at the centre of this innovation. I will stop here, happy to discuss this, and I will pass the word to my colleague. Thank you, thank you Chair, and thank you all young participants.

Miapetra Kumpula-Natri:
Very briefly also: the Hiroshima process, I think, links well with the AI Act that the European Union is doing, because we are not isolating ourselves from the world. We look forward to having a multidisciplinary approach, and we also enjoy discussing here in this multi-stakeholder forum. I must say that in AIDA, the special committee on AI in the digital age, we worked for one and a half years with all the possible stakeholders; more than 100 specialists talked to us, the European parliamentarians, and then the Commission gave its proposal, so we were mature enough to know what we want: to continue the trails we have had in Europe, that the Internet, and what is built on the Internet, should be human-centric and respect fundamental rights. Coming from the innovation and industry committee, and I also worked on this legislation, we are not innovating away; we are giving some guardrails, a framework. Often somebody asks, why are you regulating anything on AI? I say, what are you planning to do with AI if you are scared that high-risk uses might have some rules? Make a comparison with medicines: if your doctor says to take a medicine, do you want to first sign a consent form saying it is at your own risk before you swallow the pill? So we wish that AI systems on the market have some risk-based security analysis, and that this is kept globally open so that we can have interoperable systems, and so on. That is one more aspect. As for the Hiroshima process bringing more G7 countries to work together: we have been working with the OECD, UNESCO and others on defining what AI actually is and what we try to give framework legislation on. So I welcome, in that sense, that more countries are taking it seriously and looking for a better future with fewer threats. Thank you very much. Back to the moderator.

Yulia Mournets:
Thank you, and thank you for helping us with the moderation. Yes, we feel that you are here with us, but sometimes it’s good to… The sun is coming; we have the morning and the sun coming, so we are with you right now. Thank you so much to the members of the European Parliament for briefing us and bringing detailed information on the ongoing processes on AI from the legislative and policy perspective. I would like to turn now to Ananya. Ananya, you are on the stage. You are from India. You are a young leader; you have participated in a number of different programs, capacity building, et cetera, and one of your interests is AI. I would like to ask you: how would you like to see young people involved in the implementation of different regulations all over the world, or at least to have your say, and to have your say in a sustainable manner? Ananya, you have the floor.

Ananya:
Thank you very much for inviting me today. I am Ananya, the youth advisor to the USAID Digital Youth Council. As a young person closely involved in the design and implementation of the digital strategy of an international agency like USAID, here is how I think young people could and should be involved in the implementation of the Hiroshima process. First of all, I want to emphasize that it is crucial that young people are involved in any process related to anything digital. As a generation born into a digital age, digital technologies influence our aspirations, ideas and lives from the very moment we are born. Hence, I strongly contest the use of the words “future stakeholders” for us young people, because in the digital context, due to the ubiquitous penetration and presence of technology in their lives, young people are nothing less than equal and current stakeholders. Therefore, they must be provided a platform right now to share input on processes and policies which will massively influence their lives, like the Hiroshima process. I will very briefly list three things that I would want to happen. The first is to ensure that young people from diverse backgrounds, socially, educationally and geographically, have a seat at the table, like you see right now, because I am not from the European region; so thank you very much for walking the talk. We should make young people part of decision-making bodies, like the EURid body I heard about, and of committees or working groups related to the Hiroshima process. I would also suggest hosting consultations, youth summits, side events, networking sessions, conferences like the IGF, exhibitions, educational programs, and other such community engagement activities which enable the youth to share their ideas, experiences, thoughts, suggestions, projects, and initiatives that align with the goals of the Hiroshima process. 
But I would insist that any such fora be held at local, national and international levels, so as to make the process more accessible, inclusive, and “glocally” relevant. Second, as young people tend to be among the biggest consumers and producers of social media content, it would also be a very good and feasible idea to leverage digital platforms and social media to engage young people in the Hiroshima process. We could do that by creating interactive online spaces and using social media campaigns, hashtags, and online events like webinars to raise awareness and mobilize support from and with the youth. And finally, it is always important to acknowledge that just because we are young does not mean that our contribution is not instrumental. Hence, we must recognize and celebrate the contributions of young people to the Hiroshima process by highlighting their achievements, stories and initiatives through awards, scholarships, or media coverage. This will also inspire future processes to include more young people in the dialogue. I will briefly end by saying that there is nothing for us without us. And thank you, Yulia, again for inviting me.

Yulia Mournets:
Thank you, Ananya. I think we have to thank you for a strong message that has probably been heard, and for this strong voice and tangible proposals on what can be done. To continue on this strong and positive note, and our open forum is generally positive, and on this strong youth voice, I would like to give other young people the opportunity to raise their questions, and maybe afterwards turn to our leaders, the members of the European Parliament, the European Commission, and the private sector, to comment and to answer their questions. We have Levi on stage. Levi, you have been very patient; thank you for staying with us. You are from Zambia. Levi, do you have a question, or maybe you would like to raise an issue to the members of the European Parliament and other senior leaders present in the room? Levi.

Levi:
Thank you, Yulia. Let me make a comment, and then I will pose a question to the European delegation and the people in this room. I’m Levi Siansege from the Internet Society Zambia chapter, and I also head the Zambia Youth IGF. Now, there have been concerns about AI, misinformation and disinformation. And quite honestly, misinformation has sometimes actually been perpetrated by certain government officials; in my region, in Africa, I would like to believe it has been so in certain cases, and I don’t know about the European Union. Now, three quarters of the internet is used by the youth, and they have a lot of innovations. One of the European delegates mentioned that it is also nice to get inspiration from the youth. Being the majority of internet users, I think they have quite a number of solutions to some of the world’s problems, and having less interaction or involvement in making decisions or policies about internet governance and internet laws feels, to some extent, unfair. And if we are talking about the future of the internet, then what future are we building if we do not involve them as much as possible? So, building on what Ananya said, my question would be: how deliberate is the European Union in ensuring the sustainability of youth engagement in policy and governance issues, especially when it comes to technology and the internet space? How deliberate is the European Commission in ensuring sustainable engagement, learning from the youth not only in the European Union but across the entire continent and across the entire globe, and in engaging the youth in making these decisions? 
Because youth internet governance forums are one of the platforms where you can actually see what the youth are thinking and some of the innovations they have regarding how they envision the future of the internet. But if we are creating a sustainable future for the youth, how about being deliberate in engaging them? I end there. Thank you.

Yulia Mournets:
Thank you. Thank you, Levi. Thank you for the question and statement. We will go in two minutes sharp to the European Parliament delegation for their comments and discussion. I would just like to take another question online, because we try to be as inclusive as possible, and indeed we need to encourage online participation. We have a question from Muhammad from Pakistan. Muhammad, you have one minute, please. Please do raise your question and be on camera. Thank you, Muhammad, you have the floor.

Muhammad:
Hello, and thank you so much for having me. I hope I’m audible. (You are. Please continue.) First of all, my name is Muhammad Amarali, and I am a Generation Connect Youth Envoy for the Asia-Pacific region with the International Telecommunication Union. I have been listening to the discussion for a while now. The honourable members of the European Parliament have been discussing artificial intelligence and cybersecurity, and the youth fellows have been discussing the inclusion of youth in the overall process. One particular aspect of internet governance, which has been my core area of interest as well, is the digital governance and cooperation sector. My question would be: how do the stakeholders look at the participation of youth within the digital governance and cooperation sector? Because, as Ananya mentioned, the youth are current stakeholders, and of course they are future stakeholders as well; they are the ones who will embrace this digital transformation in the first place and then pass it on to other generations. So is there a mechanism in place, or in the pipeline, where youth would be included in the digital governance infrastructure as well? Thank you.

Yulia Mournets:
Thank you, Muhammad. Thank you for your question and for being brief. Now we would like to turn to the members of the European Parliament for your comments. Cristian-Silviu Bușoi, the floor is yours, and the floor is in the hands of the members of the European Parliament, if you want to comment.

Cristian-Silviu Bușoi:
Thank you so much. The two questions are very much related, and they are about the main issue discussed today: the need for stronger and clearer involvement of young people, of youth, when decisions are made related to digital legislation, but also to future artificial intelligence rules. So how sustainable is youth participation? You heard Director O’Donohue, you heard colleagues: the will at the EU institutions is very clear. Youth have been very much involved until now via the consultation processes and via participation in different formats and initiatives of the political groups in the European Parliament. Of course, some Members of the European Parliament, as you saw today, are young people very much connected to the youth, but I think we should do more. We should do more in order to have better coordination and formal consultation and dialogue, and the Youth IGF could be a permanent and formal stakeholder for European legislators. A very short intervention, if you allow, because Mr. Brando Benifei, the colleague who was quoted and our rapporteur for artificial intelligence, has joined us. He was in other meetings here, because there are many meetings that are very interesting for our colleagues. Brando, please comment on this, and if any other colleague who has not yet had the chance to take the floor would like to say something, also very briefly, please give me a sign. Brando, you can use that microphone or you can come here, as you like.

Brando Benifei:
So I think this topic has been quite relevant for you; I have seen that from my speech on the first day the Youth IGF took for social media exactly the excerpt where I mentioned the need for the involvement of young people in the design of these policies and also in their governance and enforcement. That is exactly what we are trying to do with the AI Act: in the AI legislation at European level that we are negotiating, we included a clear reference to the need for permanent involvement of stakeholders, not just academia and business, but also civil society, including young people. We are still negotiating, but in the Parliament text that we are defending in the negotiations with the member states to get the final text of the law, this is quite extensive, because such consultations are considered necessary at many steps of the application of the law, and also for its update. I hope this can be a model and that we continue in this direction. I agree with what my colleague said, that the Youth IGF should be an important permanent interlocutor for this work, because it has become clear in these days’ discussions that the international dimension is crucial. Our more legislative, domestic European work has a necessary international, global dimension; that comes across very clearly. The Hiroshima process was mentioned earlier; we are working and thinking of this global effort, and so it is crucial that we have the global youth involved. In fact, and I conclude on this, I must say as a politician, on a broader reasoning, that it has been extremely important and extremely helpful that young people have mobilized so much on climate issues. They have driven the debate in a way that I think was very important. 
I think we need to find ways, and they might not be the same, to mobilize more young people and more youth associations around the tension, and the possible outcomes of the tension, between democracy and the new technologies, because I think this is the point. Thank you.

Cristian-Silviu Bușoi:
Thank you so much, Brando. I know that we are almost out of time, and I see someone from the organizers coming, maybe to remind us of that. We have three minutes, but we have two people queuing at the microphone, if the moderator allows me, because maybe she does not see them in the room.

Yulia Mournets:
Let’s try to take these questions; we have two youth in the room at the microphone. Two short questions; we can take them together and then address them. Please ask your question without making a statement, because we do not have any more time, so please be very, very short: you have one minute.

Herman Lopez:
Yeah, state your name also for the record. Thank you very much. This is Herman Lopez from the standing group of the Internet Society, part of the board. It is great to see that India and Africa are also involved in the discussion, but how can we from Latin America also get involved in these discussions about AI? Because you see people from all over the world here, but not from Latin America. Thank you very much.

Yulia Mournets:
Thank you. Shall we answer this question first or not? So we will take both, and then I will… Good afternoon. Very short.

Nadia Chekhia:
My name is Nadia Chekhia, and I will be short. I represent the European Dialogue on Internet Governance; I coordinate the youth activities of the European regional IGF. My question to the MEPs is: what is your definition of meaningful participation? At YOUthDIG we have been working on what that means, working with youth stakeholders to understand their activities, and we are asking you to reflect on this. For this we have a publication that we would like to share with you, and we also invite you to EuroDIG to reflect on how we can have more participation of youth around Europe, allowing them to integrate into the system and take on leadership positions. Thank you very much.

Cristian-Silviu Bușoi:
Thank you. Yulia, if you allow me, I will just give two short answers to the questions, and I will stand. On Latin America: clearly there is a lot to be done together, to work together, to be inspired by each other. You know that the European Union and the European Parliament have strong ties with countries from Latin America and also with Africa; we were happy to meet members of the African parliaments yesterday. I am not sure whether there is a comparable network in Latin America that could be an interlocutor on internet governance and digital and artificial intelligence issues; if there is, it would be good, once we have a permanent working group in the European Parliament, to start working with it, and the youth dimension, of course, could be extremely important. On the involvement of the European Youth IGF: I see that you have made an excellent first step, as you spoke with Director O’Donohue, because the European Commission is the key player. I kindly invite and encourage you to take very seriously the consultation process on legislative acts. You know that the European Commission runs public consultations in which the stakeholders are very much involved, and the Parliament has asked the European Commission, and we saw a strong commitment and a strong will there, to take seriously and take on board the relevant and important comments and inputs. So that would be the public consultation phase, when the legislation is prepared. Then, of course, talk to the co-legislators when we amend, modify and improve the proposal of the European Commission. You saw the openness of colleagues: go directly to the rapporteurs and to the coordinators of the political groups in the relevant committees, and also, once again, once we have the permanent IGF working group, together with other initiatives, intergroups and, of course, the relevant committees. 
You have many interlocutors, and I am sure they will be very open to your participation and contribution, which is indeed very valuable.

Yulia Mournets:
Thank you, Cristian-Silviu Bușoi, for answering these questions. We have to thank you for all your interventions, and we would like to thank all the members of the European Parliament present in the room. We will probably quote Mr. Benifei once again afterwards, with his proposal to have permanent Youth IGF legislative work together with the Parliament, as well as the other members of the delegation on precise topics. With that, we have to end our session. I would like to thank all the young people from the Czech Republic, India, Portugal, Zambia, Pakistan, Europe and other countries present in the room for your statements and questions. We would like to thank the European Commission and Pearse O’Donohue for being with us today, and of course .EU, present in the room, and all other participants. Thank you. Until next time; we will continue the discussions with you online or offline.

Cristian-Silviu Bușoi:
Thank you. A special word of thanks for Yulia, our moderator. Actually, I was the first to make the proposal of the permanent representation. I’m joking. It is good that… Apologies. We are very committed to a permanent dialogue, and a special thank-you and appreciation for the moderation. During COVID times, I chaired the ITRE committee with colleagues in different European capitals under different presidencies, and I know how challenging it is to moderate from a distance; the moderation and organization were excellent. On behalf of the European Parliament delegation, I would like to thank the Youth IGF and the young people present here today or online. We are really inspired by what we heard today and by their commitment to be part of the best solutions for the future of internet governance. Thank you.

Ananya

Speech speed

168 words per minute

Speech length

578 words

Speech time

206 secs

Brando Benifei

Speech speed

155 words per minute

Speech length

425 words

Speech time

165 secs

Cristian-Silviu Bușoi

Speech speed

150 words per minute

Speech length

1946 words

Speech time

781 secs

Miapetra Kumpula-Natri

Speech speed

161 words per minute

Speech length

356 words

Speech time

133 secs

Herman Lopez

Speech speed

224 words per minute

Speech length

88 words

Speech time

24 secs

Irena Joveva

Speech speed

155 words per minute

Speech length

550 words

Speech time

213 secs

João Pedro

Speech speed

122 words per minute

Speech length

237 words

Speech time

117 secs

Levi

Speech speed

176 words per minute

Speech length

410 words

Speech time

140 secs

Mr. Lagodinsky

Speech speed

143 words per minute

Speech length

442 words

Speech time

186 secs

Muhammad

Speech speed

178 words per minute

Speech length

270 words

Speech time

91 secs

Nadia Chekhia

Speech speed

172 words per minute

Speech length

143 words

Speech time

50 secs

Nathalie

Speech speed

141 words per minute

Speech length

189 words

Speech time

81 secs

Peter

Speech speed

174 words per minute

Speech length

561 words

Speech time

193 secs

Regina Fuxova

Speech speed

152 words per minute

Speech length

526 words

Speech time

207 secs

Stefanets

Speech speed

151 words per minute

Speech length

303 words

Speech time

120 secs

Yulia Mournets

Speech speed

160 words per minute

Speech length

2717 words

Speech time

1021 secs

Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135

Table of contents


Full session report

Dian

During Indonesia’s presidency in the Digital Economy Working Group (DWG) 2023, they placed a strong emphasis on the importance of digital skills. As part of their efforts, they launched three output documents aimed at improving digital skills and digital literacy. These documents include the Compendium of Framework of Practices and Policies on Advanced Digital Skills and Digital Literacy, the G20 Toolkit for Measuring Digital Skills and Digital Literacy, and a collection of policies and recommendations to improve meaningful participation of people in vulnerable situations in the digital economy. These initiatives demonstrate Indonesia’s commitment to equipping its citizens with the necessary skills to thrive in the digital era.

Indonesia also actively participates in the BUD Forum, led by the Ministry of Communication and Informatics. They carry out priority deliverables in this forum, further highlighting their commitment to the development of the digital economy.

One of the key priorities of Indonesia’s presidency in the DWG 2023 is to bridge the digital divide. By prioritising digital skills, Indonesia aims to bring economic prosperity and social inclusion on a global scale. They recognise that the digital divide hinders progress and are committed to ensuring that all individuals have access to the necessary resources and opportunities to thrive in the digital era.

Furthermore, Indonesia places great importance on developing a robust digital infrastructure. They understand that reliable and high-speed internet is the backbone of digital transformation and plays a crucial role in supporting economic growth and development. As such, Indonesia actively engages with international fora, including ASEAN and the ITU, and seeks support from multinational entities to build and maintain a robust digital infrastructure.

In addition to these priorities, Indonesia also focuses on promoting e-governance and the digitalisation of government services. By involving the public sector in these efforts, they aim to streamline administrative processes, enhance transparency, and make it easier for citizens to access essential services.

Indonesia also recognises the importance of cybersecurity and data protection in the digital age. They collaborate with both the public sector and international organisations to establish data protection laws and enhance cybersecurity measures. This reflects their commitment to create a secure and trustworthy digital environment.

Another area of focus for Indonesia is digital education. They understand that digital skills are crucial for preparing the workforce of tomorrow. To facilitate this, they actively engage in public-private partnerships to develop and implement digital education programmes that train individuals in necessary digital skills.

Lastly, Indonesia emphasises the importance of inclusivity and cultural diversity in the digital space. Being a country with diverse cultural entities, Indonesia recognises the need for content in local languages and subsidising access to digital services. They strive to ensure that everyone, regardless of their background, has equal access to the benefits of the digital world.

In conclusion, Indonesia, through its Troika role in the DWG 2023 and its chairmanship of ASEAN, is committed to advancing the digital economy by prioritising digital skills, bridging the digital divide, developing reliable digital infrastructure, promoting e-governance and cybersecurity, providing digital education, and fostering inclusivity and cultural diversity. These efforts demonstrate Indonesia’s dedication to harnessing the power of digital transformation for economic growth and social development.

Audience

During the forum, an audience member expressed gratitude several times by saying “thank you” and indicated their intention to depart, repeating “bye-bye.” Although the reasons for leaving were not explicitly stated, it can be inferred that the individual had completed their participation or had other commitments to attend to. Their repeated expressions of gratitude and farewell suggest a polite, respectful exit that left a positive impression on the audience and participants.

Mr. Amano

Mercari, a popular peer-to-peer trading platform, is actively promoting a circular economy and expanding its global reach. With a customer base of over 20 million and gross merchandise volume (GMV) reaching 100 billion yen last year, Mercari is dedicated to reducing the disposal of items and encouraging sustainable consumption practices.

Its initiatives include equipping high school students with digital and digital marketing skills through project-based learning programs. Mercari collaborates with local educational institutions in places such as Wakayama and Kyoto, providing opportunities for students to sell local products on its platform. In addition, it supports IT education and aims to increase the number of female engineers by donating to Kamiyama Tech College and conducting workshops for engineers and local communities. Recognizing the importance of hands-on interaction, Mercari also dispatches specialist engineers to local schools and companies, facilitating real-life learning experiences.

Furthermore, Mercari understands the global significance of implementing digital skills in local and developing countries, emphasizing the importance of communication and collaboration with top-tier engineers. Overall, Mercari’s commitment to sustainability, education, and inclusivity sets an inspiring example for companies seeking to make a positive impact.

Yamanaka San

The Japan International Cooperation Agency (JICA) is playing a significant role in capacity and infrastructure building across the ASEAN and Pacific regions. Last fiscal year, they had funding of approximately 1.2 to 1.3 trillion yen for projects, demonstrating their commitment to supporting development initiatives in these regions. Moreover, JICA’s efforts go beyond financial support. They have trained 13,217 individuals and employed 9,163 experts and volunteers from around the world, showcasing their dedication to capacity building and knowledge transfer.

JICA is also intensifying efforts to integrate digital components into existing infrastructure. They aim to enhance cybersecurity measures and have partnered with the ASEAN-Japan Cybersecurity Capacity Building Centre (AJCCBC) to develop and strengthen cybersecurity capabilities across the ASEAN region, ensuring a secure digital environment.

Additionally, JICA is actively working to expand technological connectivity. They plan to lay fiber lines for the New Urban Information Infrastructure (NUI) project, a digital initiative to enhance connectivity in urban areas. To achieve this, JICA is collaborating with partners such as the United States and Australia, highlighting the importance of global cooperation in driving technological advancements and promoting connectivity.

Partnerships with the private sector are considered crucial in achieving technological solutions and supporting connections between companies. JICA recognizes that the private sector has valuable technological solutions and expertise that can contribute greatly to development projects. Collaboration between Indonesian and Japanese companies is particularly emphasized to facilitate knowledge-sharing and innovative solutions.

Furthermore, the importance of having appropriate policies in place to support and foster innovation and ecosystem development is highlighted. The speakers argue that countries need a comprehensive approach that encompasses not only technology skills but also policy areas and digital skills to connect technology skills with the private sector and ensure a conducive environment for growth and progress.

In conclusion, JICA is playing a crucial role in capacity and infrastructure building across the ASEAN and Pacific regions. Their substantial funding, extensive training programs, and efforts to integrate digital components into infrastructure exemplify their commitment to sustainable development. Additionally, their emphasis on partnerships with the private sector and the need for effective policies underscores the importance of collaboration and a holistic approach to foster innovation and drive ecosystem development. Ultimately, JICA’s initiatives are contributing to the advancement of the regions and paving the way for a prosperous future.

Dr. Ran

The Association of Southeast Asian Nations (ASEAN) recognizes the potential of a single digital economy: the region is currently the fifth largest economy in the world, worth US$3 trillion, with a significant consumer base of 300 million people. The pandemic has further accelerated the digital transformation in ASEAN.

Despite this positive development, there are challenges in the journey of digital transformation in the region. One major challenge is the varying levels of digital readiness among ASEAN countries. Some countries are better prepared for the digital transformation than others, which creates a gap in terms of embracing the digital economy. Another challenge is the issue of cybersecurity, with a significant divide between the lowest and highest performing nations in terms of cybersecurity measures. This poses a risk to the stability and security of the digital ecosystem in the region. Additionally, emerging technologies like AI and Cloud Computing are having an impact on the labor market, further complicating the challenges of digital transformation.

Another pressing concern in the ASEAN region is the urgency for skill development and training. It is estimated that 10-20% of jobs will be displaced by digital technology in the coming years. However, there is a shortage of digitally skilled professionals in ASEAN, resulting in a need for about 50 million additional digital professionals. This highlights the need for comprehensive measures to bridge the digital divide and upgrade skills in the region.

ASEAN is taking proactive steps to address these challenges. Various ASEAN bodies, such as SME, Science Technology, and Education, are setting up facilities to enhance digital knowledge and skills. The aim is to make currently unskilled workers relevant and train higher-level professionals to meet the demands of the digital economy.

Inclusivity in the digital economy is a priority for ASEAN. Efforts are being made to equip micro, small, and medium-sized enterprises (MSMEs) with digital knowledge through initiatives like the ASEAN SME Academy. This will enable these enterprises to participate more actively in the digital economy and benefit from it.

Additionally, ASEAN is actively working on improving the logistics system and developing a digital payment system across the region. An agreement has already been secured to enhance cross-border e-commerce and digital payment using QR codes. These efforts aim to promote seamless integration and efficiency in cross-border transactions.

Addressing digital security is also a priority for ASEAN. Plans are underway to develop a system for digital ID or digital business ID, with the goal of creating an interoperable platform for businesses and consumers. This will enhance digital security and facilitate trustworthy digital transactions in the region.

In conclusion, ASEAN recognizes the potential of a single digital economy and is actively pursuing measures to accelerate the digital transformation. However, challenges such as varying levels of digital readiness, cybersecurity, and job displacement persist. The urgency for skill development and training in ASEAN is apparent, and initiatives to bridge the digital divide and upgrade skills are being implemented. Inclusivity in the digital economy, improvement of logistics and digital payment systems, and the development of digital ID systems are also key areas of focus. By addressing these challenges and embracing digital advancements, ASEAN aims to thrive in the digital age.

Daisuke Hayashi

The digital economy has great potential and is growing faster than the traditional economy: a 10% increase in internet adoption leads to 0.5 to 1.2% income growth, while a 1% increase in the adoption of digital technology is associated with labor productivity growth of 1 to 2%. The size of the digital economy is rapidly increasing and now accounts for between 5 and 7% of GDP. However, there is a disparity in company growth across regions, with GAFA and Microsoft dominating the market; companies in the East Asia and Pacific (EAP) region have experienced slower growth, possibly due to a lack of knowledge and skilled workers.

To address these challenges, discussions and partnerships involving multiple stakeholders are necessary. Daisuke Hayashi advocates a diversified approach that involves both the public and private sectors in enhancing digital skills. Efforts are being made to address the digital skills gap, particularly among the younger generation, and to support the development of skilled individuals in local areas. There is also a shift in focus from cybersecurity to the development of digitally skilled individuals. It is necessary to improve digital skills and foster digitalization through public-private collaboration, and international exchange is encouraged to drive innovation in the digital economy. Overall, it is essential to improve digital skills and ensure equitable growth in the digital economy.

Rika Tsunoda

During the G7 Hiroshima Summit in 2023, Japan emphasized its key focus areas in digital infrastructure and capacity building in developing countries. One of the main areas of focus is the need to bolster security and resilience in digital infrastructure. Japan recognises the importance of having a secure and robust digital infrastructure to support economic growth and development. To ensure supply chain resilience, Japan promotes the use of open 5G architecture and vendor diversification.

In addition to infrastructure, Japan is also working to address the knowledge gap in digital skills and literacy. The government of Japan offers capacity-building programmes in the digital field, aimed at improving digital skills and literacy in developing countries. Examples of these programmes include the ASEAN-Japan Cybersecurity Capacity Building Centre and cyber defence exercises. These initiatives are steps in the right direction towards improving digital skills and literacy, paving the way for greater digital inclusion.

Japan is also actively promoting the establishment of a 5G Open RAN architecture. The Open RAN approach promotes supply chain resilience and transparency, and encourages healthy competition. The Quad leaders have even announced cooperation with Palau to establish the deployment of Open RAN. Japan plans to hold a symposium on Open RAN through the ASEAN-Japan ICT Fund, further demonstrating its commitment to this technology.

The importance of variety in capacity-building programmes tailored to the needs of different countries is also emphasised by Japan. Each country has different requirements and needs when it comes to capacity building, and it is important to cater to those specific needs. Already, there are institutions and programmes such as APT (Asia-Pacific Telecommunity) and AJCCBC (ASEAN-Japan Cybersecurity Capacity Building Centre) providing capacity building. The Ministry of Internal Affairs and Communications (MIC) has been instrumental in helping Japanese telecom companies expand their ICT solution services overseas, catering to the diverse needs of different countries.

Furthermore, Japan aims to promote its ICT companies to share solutions for cross-border payment and digital ID with developing countries. Japanese ICT companies possess the technical abilities required to achieve secure cross-border payment and digital ID systems. The government of Japan sees the potential in using capacity-building initiatives to share these solutions and contribute to reducing inequalities and improving infrastructure in developing countries.

In conclusion, the G7 Hiroshima Summit in 2023 highlighted Japan’s commitment to digital infrastructure and capacity building in developing countries. Japan aims to bolster security and resilience in digital infrastructure, address the knowledge gap in digital skills and literacy, and support the establishment of a 5G Open RAN architecture. The emphasis on variety in capacity-building programmes, as well as the promotion of Japanese ICT companies to share solutions for cross-border payment and digital ID, further demonstrates Japan’s dedication to fostering inclusive and sustainable development.

Session transcript

Daisuke Hayashi:
Thank you very much for coming to the session on enhancing the digital divide for the development of the digitalization in the region. My name is Daisuke Hayashi from the World Bank and I’m a moderator of this session and once again, thank you very much for coming. Firstly, I’d like to start with my presentation, a bit short, a brief, so that I would like to share with all of you what is the background and what we are going to discuss about. And today’s speaker is composed of five people from the government of Indonesia, ASEAN Secretariat and Japan. Those three speak online and the two tech specialists from JICA, Japan International Cooperation Agency, Mr. Yamanaka-san on your right, and from the Japanese commerce company Mercari, Mr. Amano-san will join us as well. And just to say the background of the digitalization, it’s not so very familiar with all of you, so just skip the details, but we all share that the digital economy has a lot of potentials and grows faster than the traditional economy and of course, they have generated considerable benefits, like 10% increase in internet adoption with 0.5 to 1.2% income growth, or the 1% increase in adoption of digital technology is associated with labor productivity growth of 1 to 2.0%. There are so many examples, but we can already know that kind of potentials. And also on your right side, the digital economy size is still modest, but increasing rapidly, now between 5 to 7% of GDP, and notably in the EAP areas like Indonesia, there’s a lot of increase in recent years. But at the same time, we are facing some challenges. Of course, not only the digital divide, but also the digital leaders and lack of digital leaders in the region. For example, in the United States, what we call the GAFA, plus Microsoft has a lot of dominating the market. 
And also the Asian Pacific region, you see that so many companies are growing, but compared to the United States, and the United States, it’s not so big. And of course, increasing the companies in the EAP region is so rapidly like, for example, the Tokopedia, Glove, or some sort of apps like the car ride sharing system or e-commerce services are growing, but it’s not so huge as Google or other giga tech companies. So I think we think that these kind of differences or lack of the growth of the companies are due to that lack of the knowledge, lack of the skilled people, or other things. So I’d like to deepen this discussion, deepen those kind of perspectives, so that how we could evolve the digital development in the region in the context of the digital skilled people. And this is a main point of the discussion, but of course, I’d be happy to elaborate other aspects as well. So I’d like to invite the ASIC, Dr. Ran, to, by, of course, introduction, and as well as the regional access, sharing the regional situation of digitalization, and what the challenges they are facing. So Ran-san, please.

Dr. Ran:
Thank you. Thank you very much. I hope that you can hear me okay.

Daisuke Hayashi:
Yes.

Dr. Ran:
Okay. All right. I very quickly to share on the screen, my presentations. First of all, I would like to thank the organizer, World Bank, and Hayashi-san for the opportunity for me and my colleague, Dian, from the Indonesia, the chair of the ASEAN this year to update and share our perspective on how ASEAN moving forward the digital agenda. Now, first, I just want to quickly give you an overview that ASEAN is very much aware of immense potential of the single digital economy of the regions. ASEAN is now the fifth largest economy in the world with 3,000 billion US dollars, and with significant 300 million consumers across the region. Now, the pandemic given a lot of impact on the economy, but it’s actually a good accelerator for the digital transformation in ASEAN. And as you may see on the screen, there are more than 400 million people, actually 460 million people are very much connected to the internet. And over the last two years, there are more than 100 million new internet users actually joined the internet setting. So having said that, ASEAN is ready to embrace the digital transformation as a new driver for growth. And as you may be aware that last year, ASEAN has launched the ASEAN Digital Economy Framework Agreement, or in short, DEFA. And it estimated that the DEFA agreement would further accelerate the digital transformation of ASEAN. And it estimated that the DEFA would contribute up to 2 trillion US dollars to the regional GDP. So the DEFA mostly shed a new light for the ASEAN Digital Economy agenda in ASEAN. And there are comprehensive nine key elements of the future ASEAN DEFA that I would like to share with you like digital trade, e-commerce, digital ID and authentication, the online and cybersecurity, the digital payment, the cost borders, data flow, the competition policy and cooperation for the emerging topic and also the talent mobility. 
So now, but having said that, I must want to say that the journey of the ASEAN toward the digital transformation is not without challenging. In fact, we have a lot of challenge that we need to overcome in order to really move forward the digital economy agenda. First and foremost, that ASEAN is a collective, is a collection of a country with a very different level of readiness or digital readiness and level development. And while we are sharing the same objective, we acknowledge that within and among ASEAN, some of the ASEAN member countries are actually the top performers of the digital transformation, while other countries are very much at the lowest performance in the region. And you can see on the screen that there is a quite significant gap between the lowest and the highest and the top performance within ASEAN. in terms of trade, payment, digital payment, digital ID, cybersecurity. So that actually is an inherent disadvantage that ASEAN needs to overcome in order to realize the so-called ASEAN digital economy. Now, another challenge that ASEAN needs to deal with, which is quite significant, is about cybersecurity. And the gap remains in the establishment of a coherent legal framework for cybersecurity is very large. And the level of maturity among ASEAN is still very significant. The digital divide, not only between ASEAN member countries, but between or among the players within the ASEAN digital is also very large, between micro, small and medium size, and also a big company. So as you may see on the screen, the micro and small enterprise is far below in terms of digital readiness to adopt in order to fully benefit the digital transformation of ASEAN. And also, we acknowledge that there are emerging challenges with regard to the emerging technology like AI, like cloud computing technology. In fact, it creates a lot of impact on our labor market, on the bias and fairness of the market. 
So we need to address it to ensure that the environment, the digital environment is truly sound and favorable to everyone. Last but not least, it’s about the digital scale. And according to the estimation that about 10 to 20% of the job will be displaced by the digital technology in the next few years. So there are about 28.1 million jobs will be displaced by the digital technology. But at the same time, we are dealing with a lot of scarcity in terms of digital talent. So as you may see on the screen, that ASEAN needs about 50 million additional digital professionals in this area. And we need to address it sooner than later. So there are two challenges that we need to overcome. First, we make those who are unskilled to be more relevant. And we also need to train a higher professional to actually empower ASEAN to really benefit the digital transformation. Now, I just want to turn to my last point, that ASEAN is aware of our challenges. We know that we need to do a lot, particularly in terms of digital skill. And across the sector body of ASEAN, like ASEAN SME, ASEAN Science Technology, ASEAN Education, we all set up a facility to ensure that we open more opportunity for those who need to equip with digital knowledge. While at the same time, we want to support the education system and the science technology in ASEAN to really make ASEAN to be more adaptive to the new environment. So thank you very much. And Aseesan, I just want to stop here and open for further questions. Thank you very much.

Daisuke Hayashi:
Thank you, Ran-san. Thank you for your comprehensive explanation. And of course, now we understand what the challenges we are facing as ASEAN whole. So I’d like to now invite, from Indonesia, Ms. Dian, for sharing with us your priority. Notably, you know, Indonesia has experiencing the presidency of the G20 and ASEAN last year and this year. So would you please share with us what are your priorities and what kind of background as well?

Dian:
Yes, thank you, Mr. Hayashi. So thank you also for the opportunity to speak in this prestigious panel discussion that you convened today. And as you say that this year, Indonesia is no longer the presidency of G20, but we are part of Troika countries in the Digital Economy Working Group or DWG 2023, as well as the chair of ASEAN in 2023. At the same time, we actively contribute to the BUD Forum and our ministry, the Ministry of Communication and Informatics of the Republic of Indonesia, or MCI-RI, involved and carried on priority deliverables in BUD Forum. As Troika in DWG, Indonesia supports the three priorities of DWG 2023 proposed by India’s presidency. They are digital infrastructure for digital inclusion and innovation, building safety, security, resilience, and trust in digital economy, and digital skilling for building a global future-ready workforce. The last priority, the digital skilling for building a global future-ready workforce aims to enhance the G20 members’ collective efforts to promote digital skill and digital literacy with a focus on addressing digital divide and skill gaps, including gender skill gaps, through skilling, reskilling, and upskilling, and other capacity-building initiatives. This digital skill initiative is a continuous effort made by China in 2016, Germany in 2017, Argentina in 2018, and Indonesia’s G20 presidencies in 2022. During Indonesia’s DWG presidency 2022, we emphasized on digital skills and launched three output documents. We launched Compendium of Framework of Practices and Policies on Advanced Digital Skills and Digital Literacy, and then G20 Toolkit for Measuring Digital Skills and Digital Literacy, together with a collection of policies and recommendations to improve meaningful participation of people in vulnerable situations in the digital economy. 
Moreover, as the chair of ASEAN 2023 MCI Indonesia overseeing the digital sector under ASEAN Economic Community Pillar and information sector under ASEAN Sociocultural Community Pillar, therefore, we propose three priority deliverables. Under the digital sector, we propose two priority deliverables. The first one is regulatory pilot space to facilitate cross-border digital data flow self-driving car in ASEAN, or we call it ASEAN RPS. The ASEAN RPS is supported by government of Japan as one of the dialogue partner of ASEAN. And the second one is ASEAN Framework on Logistics for Digital Economy Supply Chain Indonesia Presidency during G20, especially in DWG in 2023, too. And during the ASEAN chairmanship in 2023, we demonstrated a strong commitment to advancing digital skill and technological development in the region. And under its leadership, Indonesia aim to harness the power of digital transformation as a driver of economic growth and social development. The Indonesia government implemented various initiatives to foster digital literacy, innovation, and entrepreneurship. We actively promoted digital skill development to educational program, vocational training, and partnership with the public and private sector. And then by prioritizing the digital skill, Indonesia sought to bridge the digital divide and ensure that its people as well as the broader Southeast Asian community are prepared to thrive in the digital era, contributing to economic prosperity and social inclusion on a global scale. Thank you, Mr. Hayashi.

Daisuke Hayashi:
Thank you very much, Rian-san, for your very continuous work from the G20 to ASEAN and to concretize and of course deliver some more projects in the region. So now let’s go and move to a bit different perspective because I recognize that this year is Japan and ASEAN has 50 years friendship year and I think that Japan has been contributed to the region as well. So I’d like to invite Mr. Tsunoda from the Ministry of Communications of Japan for sharing with us some of your priorities and your contribution to the region. Thank you.

Rika Tsunoda:
First of all, thank you for the question. I’m Rika Tsunoda from the Ministry of Internal Affairs and Communication. I work mostly on promoting digital infrastructure globally in International Digital Infrastructure Promotion Division. So this session is very relevant. So I’m so delighted to join this session. And as Hayashi-san mentioned, Japan hosted the G7 this year as a presidency. And I’d like to today share the priority of digital development with developing countries. And I think the first priority of Japan is to ensure economic resilience and economic security, including in developing countries in Indo-Pacific region. So to that end, promoting to deploy open and resilient digital infrastructure in developing countries is indispensable. So first, I’d like to reflect on the G7 in Hiroshima Summit, which Japan hosted as a presidency. And I’d like to focus on free and open Indo-Pacific and economic resilience and economic security part. And in Hiroshima Summit, G7 member states reiterated the importance of a free and open Indo-Pacific and underscored its commitment to strengthen coordination with regional partners, including ASEAN and its member states, and also reaffirmed the partnership with Pacific Island countries. And when we look at G7 leaders’ statement on economic resilience and economic security in building resilient critical infrastructure part, it emphasized the importance of cooperating on enhancing security and resiliency in critical infrastructure in digital domain. And also we welcome the supplier diversification efforts on 5G open RAN architecture. So from that point, the Japanese government has been promoting open 5G architecture and vendor diversification in radio access network through multiple approaches, including intergovernmental dialogue and government and industry collaboration, including capacity building. So open RAN is a radio access network that encouraged multiple vendors to share 5G networks through open interface. 
And as you know, currently 5G serves as a basic infrastructure for our social and economic activities. So 5G should be developed in secure, open, and robust ways through competition by multiple vendors. And 5G open RAN architecture have some benefits like ensuring supply chain resilience by making telecom operators procurement more flexible and helping increase network transparency and also promoting competition of base station market. So with regard to digital development with developing countries in 5G open RAN, in Quad leaders’ meeting this year, that Quad leaders’ statement announced cooperation with Palau, one of the Pacific Island countries, to establish deployment of open RAN so that regional countries are not left behind as telecommunication markets. And also there are government industry engagement cases to enhance knowledge of 5G open RAN. For example, USAID launched Asia Open RAN Academy in Philippines, which is the human resource development program targeting in the Pacific region to enhance knowledge of open RAN. And we call it AURA, and AURA has partnership with government, business, and civil society, including Japanese telecom companies. And so the MOIC is also willing to cooperate with AURA, with USAID. And next example is that this year marks 50th year of ASEAN-Japan friendship and cooperation, and Japan will be holding a symposium on open RAN on November 1st and 2nd by using the ASEAN-Japan ICT Fund, which was established by Japanese contributions. In the symposium, we are going to hold panels to share private companies’ open RAN promotion efforts and digitalization by 5G networks. And also, second priority is that I’d like to highlight that filling the knowledge gap in digital skills and digital literacy as well as developing network is also a high priority. And because internet is borderless, so one country is having high technologies, knowledge isn’t enough to achieve digital inclusion in the free and open in the Pacific. 
And to fill the knowledge gap, the government of Japan has been enhancing capacities in digital field, not only 5G area, as I mentioned, but also in other telecom areas. So I’d like to share the Japanese government activities. I’d like to quickly share my slide. Here we go. I hope you see my slide as well. So this is the capacity building programs to APT. And the fund provides extra-budgetary contribution to APT to support ICT training and international collaborative research and private project. And ICT training programs are targeted for officials from APT member states and provides lectures by experts and have discussions on broadband and cybersecurity. And also, there are official visits to ICT companies. For example, in 2022, the Japan Telecommunications Engineering and Consulting Service, JTEC, provided a training program in Fiji for monitoring natural disasters and climate change by wireless network. And also, there are international collaborative researches and projects, including internet quality assurance and security, and also development of an app. And in terms of cybersecurity, Japanese NICT, National Cybersecurity Training Center, develops training programs to deal with cyber attacks, such as cyber defense exercise. And the government, the Japanese government has been working on human resource development in cybersecurity field in cooperation with private operators in Asia-Pacific region. For example, this is ASEAN-Japan Cybersecurity Capacity Building Center, and we call it AJCCBC. And this, it was established based on the agreement of the 17th ASEAN and Japan telecommunications and information technology ministers meeting in 2017, and it started its operation in Bangkok in September 2018. AJCCBC has helped uplift cybersecurity capabilities in ASEAN region. 
The main activities include conducting multiple cybersecurity exercises with the private sector, including CYDER, the Cyber Defense Exercise with Recurrence, as well as digital forensics exercises and malware analysis exercises. The AJCCBC also holds an annual cybersecurity competition called the ASEAN Youth Cybersecurity Technical Challenge, which we call the Cyber SEA Game, to promote the abilities of young engineers and students selected from ASEAN member states. As of April 2023, the AJCCBC had hosted approximately 1,200 participants in total, contributing to improved cybersecurity capabilities in the Asia-Pacific region. I'll stop here, and thank you for listening.

Daisuke Hayashi:
Hi, Tsunoda-san. Thank you very much for your explanation, which was very fruitful and comprehensive in terms of Japan's contribution. Now I'd like to invite JICA's Yamanaka-san to talk about JICA's contribution, because JICA has been contributing a lot, not only in supporting infrastructure building but in capacity building as well. So I'd like to ask what you have brought to the region, and also about the lessons learned from your experiences with digital scaling-up projects.

Yamanaka San:
Thank you so much, Hayashi-san, and thank you to the World Bank for organizing this session. It's very interesting for us as well, because ASEAN and the Asia-Pacific region are priority regions for us. Before I go on, I think it's important to say who we are. We are a bilateral donor organization of the Japanese government. Last fiscal year we had about 1,700 ongoing projects; it's difficult to state the figure precisely because the Japanese yen is weakening, but in fiscal year 2022 we had about 1.2 to 1.3 trillion yen of funding for the projects. Last year we also had 13,217 trainees trained, so in that respect, as Hayashi-san mentioned, we are doing a lot in capacity-building areas as well. We also have 9,163 experts and JICA volunteers fielded all over the world, working in more than 150 countries, including ASEAN and the Pacific region, where we work very closely. In terms of infrastructure development, we have done quite a lot in the past, but currently, in digital technology per se, we are not doing a great deal in the Asia-Pacific region. Rather, we are utilizing the infrastructure we have contributed and trying to add digital components to it. One exception is projects such as the geospatial reference points we are working on in Thailand. That is a very important effort because it will help, for example, mobile-based services: reference points are essential for mobility services such as automated cars, or even automated combine harvesters that can do fieldwork automatically.
So we're doing a lot of proofs of concept based on these reference points. Apart from that, we have supported quite a lot of infrastructure development in water, roads, and other areas, where we see many digital technology needs, so we are trying to incorporate digital technologies, as well as ways to use data, to support these initiatives. In the previous panels we mentioned quite a lot about cyber capacity building, and I think Director Tsunoda mentioned the AJCCBC, the ASEAN-Japan Cybersecurity Capacity Building Centre. We are cooperating with the centre to provide different capacity-building initiatives in the ASEAN region, and we are trying to expand the centre into a centre of excellence for developing cybersecurity human resources. That is something we are pursuing very closely, along with other technical assistance projects and initiatives on cybersecurity, specifically in the ASEAN region. If you don't stop me, I'm going to continue. As for Pacific Island initiatives, for example, we are currently in talks with partners like the United States and Australia to have fiber lines built to the NUI, as an alternative routing for the critical fiber backbone. So, in short, we have been doing a lot in terms of capacity building, specifically utilizing the support we have provided for infrastructure in the ASEAN region.

Daisuke Hayashi:
Okay. Thank you. Thank you very much for sharing your experiences. Now I'd like to invite Mr. Amano from Mercari to share a different perspective on digital skills, because the contributions from Japan and ASEAN discussed so far have been principally from the public sector's point of view. Not only the public sector but also the private sector matters: scaling up is so important for being innovative in the region and globally. So I'd like to ask you, Amano-san, to share your experiences and how you have contributed; I think Mercari has a lot of experience supporting the scaling up of digitally skilled people in local areas in Japan.

Mr. Amano:
Thank you. I'd like to thank Hayashi-san for the invitation to this invaluable opportunity, and the other participants for the fruitful discussion and information sharing. I'm from the private sector; I work for Mercari, Inc. Mercari provides a smartphone application, also named Mercari, which is a peer-to-peer trading platform for second-hand and handmade items. By reducing the disposal of items through individual trades, we aim to accelerate the circular economy in society with our app. Mercari has broadened its business beyond Japan to the U.S. and is ready to promote further expansion in the world. In Japan, we have more than 20 million customers, reaching 100 billion yen in GMV, gross merchandise value, last year. Mercari supports local students through a project-based learning program and a donation to a local technological college, Kamiyama Kousen, using our Mercari app. Through our PBL program, high school students sell local products on the Mercari site, which is very difficult for undigitized local companies and traditional industries. We have had programs in Wakayama, Kyoto, and so on, where we teach digital skills as well as digital marketing skills, including data analysis, to local students. Based on local needs, they sell local food and traditional products, for example in Kyoto, in a real e-commerce business. They are expected to acquire useful digital knowledge and experience in real situations, and the ability to survive in commerce in a difficult era. On the other hand, Kamiyama Tech College is an educational institution that gives IT education utilizing regional features such as agriculture. Mercari gives support in the form of donations and cooperation agreements. In particular, as the majority of Japanese engineers are male, we aim to increase the number of female engineers at the school.
We also give workshops for engineers and local people at Kamiyama Kousen. For the growth of engineers in the countryside, it is very useful to utilize local resources, which helps students apply what they learned in school to real situations. The lesson we learned from these educational activities is that there is a lack of human resources who can provide technological and real-world experience in local areas; it is quite difficult. Through the internet you can learn digital skills and data analysis on YouTube and so on, but the most important thing for the younger generation is real situations and communication with real people. So by dispatching specialist engineers from Mercari to local schools and local companies, we try not just to educate but to deepen the understanding of digital skills in the region.

Daisuke Hayashi:
Thank you. Thank you very much, Amano-san, for sharing experiences that are quite different from the public sector's point of view. From Yamanaka-san's and Amano-san's remarks, I note that we have been focusing mainly on cybersecurity capacity building, and of course Japan has supported cybersecurity matters for a long time. At the same time, we are not ignoring, but are still unsure about, how to develop more digitally skilled people to be innovative. Cybersecurity is more of a protective measure against cyber attacks, but creating digitally skilled people is, to me, the more positive way forward. Dr. Ran, in your presentation you spoke about the sectors that ASEAN is now focusing on. Would you please share your experience a little more? Which sectors do you think are most important for upskilling, and what is the best way to develop such people? That is my question for you.

Dr. Ran:
Thank you very much for the opportunity. In ASEAN, we very much focus on inclusive participation of players in the digital economy. The micro, small, and medium-sized enterprises and households deserve to be equipped with sufficient digital knowledge in order for them to actively and effectively participate in the future digital economy of ASEAN, and that is our top priority for the digital era. I am thankful for a lot of support, including from the private sector and from donors, including Japan. ASEAN has established a very proactive facility called the ASEAN SME Academy, an online platform that enables everyone, especially small and medium-sized enterprises, to participate in and benefit from upskilling and reskilling programs. Second, we also encourage business matching so that more small and medium-sized enterprises and households participate in the digital economy of ASEAN. That is the first part. The second priority that ASEAN is very much focused on is promoting cross-border e-commerce throughout the region, which also enables a lot of good initiatives, like promoting e-commerce transactions by having a better logistics system across the region. We also need to develop a digital payment system throughout the region, and last year we secured a good agreement among the member states of ASEAN to enhance the digital payment network using QR codes. Six member countries of ASEAN are already part of that network, and we want to do more so that the banking sector can collaborate to enable cross-border digital payments.
We are also going to develop a new system for digital ID, or digital business ID, to enable an interoperable platform so that businesses and consumers can confidently participate in the digital environment of ASEAN. On top of that, I believe ASEAN has a lot to do to train our people and to help unskilled workers become more skilled in the digital environment. There are many things we need to do, and we are happy to work closely with our donors and partners, the World Bank and Japan, to address the challenges in digital skills in the years to come. With that, I will stop here.

Daisuke Hayashi:
Thank you very much, Dr. Ran, for your very comprehensive thoughts. Yes, as digitalization expands, a cross-sectoral approach becomes more and more important. Banking and payment systems, for example, are composed of many aspects, not only cybersecurity but also how to involve the banking sector, taxation, et cetera. It is very important that many actors be involved in creating one system, which means many aspects of knowledge must be integrated. This kind of integration of knowledge is key to the foundation of digital innovation in the coming years. So let us move to future perspectives. I'd like to invite Dian-san to share with us what role is expected of the public sector: ASEAN's view, and of course what not only regional organizations but also the World Bank and other international or multinational entities can do to deepen this kind of knowledge-sharing system.

Dian:
Okay, thank you, Mr. Hayashi, for the question. I would like to address the role that Indonesia expects from the public sector and other multinational entities. I will focus only on the digital and information sector, because that is my scope of work. In an age where digital information and technology are transforming societies, economies, and governments, Indonesia, like many other nations, has high expectations for collaboration and support in this new reality of rapidly developing technology. Indonesia, with its growing population and emerging economy, is poised to harness the potential of the digital and information sector for the betterment of its society. However, to fully realize the opportunities and mitigate the challenges, the country looks to the public sector and multinational entities for cooperation and collaboration. We are a member of many international organizations, such as the International Telecommunication Union and ASEAN, and we actively engage with ASEAN and its dialogue partners, which of course include Japan, as well as the G20 and other multilateral and international fora. First and foremost, Indonesia expects cooperation in building robust digital infrastructure. Reliable and high-speed internet connectivity is the backbone of any digital transformation. The public sector, in collaboration with multinational entities, can support Indonesia in expanding and improving its digital infrastructure, ensuring that even remote areas have access to the internet. This not only supports economic growth but also empowers citizens with valuable information and resources. Indonesia also anticipates support in nurturing its burgeoning technology and startup ecosystem.
The digital sector presents an opportunity to foster innovation, create jobs, and drive economic growth. Collaboration with international technology companies and investors can help Indonesian startups gain access to capital, mentorship, and global markets, accelerating their development and contribution to the economy. Moreover, in e-governance, Indonesia expects public sector cooperation to play a pivotal role in digitizing government services. This can streamline administrative processes, reduce bureaucracy, and enhance transparency, making it easier for citizens to access essential services. Multinational entities can also provide expertise and technology to support the digitalization of government functions. Next are data privacy and cybersecurity, both critical concerns in the digital age. Indonesia has several cooperations with the public sector and international organizations in establishing robust data protection laws and cybersecurity measures. Collaboration is essential to guard against cyber threats, safeguard sensitive information, and build trust in the digital ecosystem. Additionally, Indonesia expects support in the education sector to equip its citizens with digital skills and literacy. In a world driven by technology, digital education is crucial to prepare the workforce of tomorrow. Public-private partnerships can facilitate programs that train individuals in digital skills, ensuring that no one is left behind in this fast-paced digital revolution. Inclusivity is another vital aspect of Indonesia's expectations from the public sector and multinational entities. The nation seeks to bridge the digital divide, ensuring that all citizens, regardless of their location and socioeconomic status, can benefit from the digital age. This may involve subsidizing access to digital services and ensuring that content is available in local languages. As with many sectors, Indonesia expects cooperation that respects its national sovereignty and values.
We are a country with very diverse cultural identities, which should be acknowledged and preserved in the digital space. In conclusion, Indonesia recognizes the transformative power of the digital and information sector. The nation looks to the public sector, including its own government, as well as multinational entities, to collaborate in building digital infrastructure, nurturing innovation, improving government services, protecting data, fostering education, promoting inclusivity, and respecting cultural diversity. Together we can harness the potential of the digital age for the benefit of all, not only Indonesians, but also contributing to national and regional growth and development. Thank you, Mr. Hayashi.

Daisuke Hayashi:
Okay, thank you very much. We have five minutes left, so let me briefly ask Japan and JICA for your views on how to promote digitalization by scaling up the number of digitally skilled people. Maybe Japan's MIC first, briefly, please.

Rika Tsunoda:
Just briefly, my comments. Building on what Hayashi-san and Dr. Ran said: although there are already institutions and programs that have been providing capacity building, such as the APT, the Asia-Pacific Telecommunity, and the AJCCBC, I believe it is meaningful to provide a variety of programs through multiple channels and multiple organizations, including the World Bank, because each country has different needs and we should meet those needs as much as we can. With regard to the contents of capacity building, as Hayashi-san said, not only cybersecurity but also ICT solutions and utilization services are important areas to focus on, as Amano-san's sharing about Mercari in this session showed. The MIC has been using its budget to help Japanese telecom companies that provide digital services to solve global issues, such as climate change and inequality, expand their ICT solution services overseas. And as Dr. Ran said regarding the focus on cross-border payments and digital ID, there are also Japanese ICT companies that have such technologies. Through these capacity-building efforts, the government of Japan could share these ICT solutions with developing countries, in addition to digital infrastructure, so that developing countries can get to know these solutions. These are my points. Thank you.

Daisuke Hayashi:
Thank you. And then Yamanaka-san, please. One minute, right?

Yamanaka San:
Yes. Thank you so much. Building on Tsunoda-san's comment, I think partnership with the private sector is going to be critical, because they have the technological solutions and they know them. So how you support connections between companies, for example between companies in countries like Indonesia and Japanese companies, is also going to be very critical. In terms of content, not only the digital skills side but also policy is very important: how you help countries adopt the right policies to support and foster innovation and ecosystem development. In that respect, we also support training in digital policy as well as in skills. So we need this two-pronged approach: not only technology skills but also the policy areas, together with connecting to the private sector.

Daisuke Hayashi:
Thank you very much for your comprehensive and conclusive comments. Finally, I invite Amano-san, from the private sector's point of view. As Yamanaka-san mentioned, collaboration with the private sector is so important, and it is becoming more and more important for innovation from now on. So please give us some concluding words.

Mr. Amano:
Well, it's quite difficult to give concluding words, but I believe that in order to bring digital skills to people in local areas or developing countries, it is important not only to teach digital skills, but also to enable communication between top-tier engineers and local people. From this viewpoint, government promotion of the movement of people from urban to local areas is very important, but lowering the barriers to cross-border movement of engineers is also very important, because even developed countries, including Japan, have a scarcity of engineers, and top-tier engineers are needed not only in urban areas but also in local areas. So we hope that governments will try to lower barriers such as taxation or visa issuance. By doing that, digitally skilled people can disperse across the world. I think this is quite important for spreading digital skills among people.

Daisuke Hayashi:
Thank you. Thank you very much, Amano-san. I believe that from this session we understand the challenges we are facing, but at the same time we have had a lot of experience in some sectors, like cybersecurity and infrastructure technologies. From now on, it is very important to improve, develop, and foster digitalization by cultivating more innovative people, which requires many skills and is enriched by other sectors' expertise and a mixture of cultures. Public-private collaboration is therefore very important, and it is important to create a venue for exchanging knowledge for further scaling up of digital skills, from the region to the world. That could bring more exchange globally, not only at the regional or country level, and would lead to further innovation, including the AI innovation we are discussing mainly at this IGF 2023. I'm sorry, but we have run out of time, so I cannot take questions from the floor. Thank you very much for your cooperation. From the World Bank side, I would be happy to work with all of you, our clients, to develop digitalization in the world, together with Japan as well as ASEAN and private sector collaboration. Thank you very much for your participation.

Dian:
Thank you, Mr. Hayashi.

Audience:
May I leave the forum? Thank you. Thank you very much. Okay. Bye-bye. Thank you. Bye-bye. Bye-bye.

Audience
Speech speed: 179 words per minute
Speech length: 25 words
Speech time: 8 secs

Daisuke Hayashi
Speech speed: 121 words per minute
Speech length: 2006 words
Speech time: 996 secs

Dian
Speech speed: 122 words per minute
Speech length: 1364 words
Speech time: 668 secs

Dr. Ran
Speech speed: 123 words per minute
Speech length: 1483 words
Speech time: 722 secs

Mr. Amano
Speech speed: 145 words per minute
Speech length: 686 words
Speech time: 285 secs

Rika Tsunoda
Speech speed: 149 words per minute
Speech length: 1340 words
Speech time: 540 secs

Yamanaka San
Speech speed: 159 words per minute
Speech length: 853 words
Speech time: 323 secs

Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33

Full session report

Martin Boteman

The discussion delves into the significance of identity in the realm of security and the crucial role that AI can play in safeguarding identities. It is acknowledged that with the advancement of AI, data has become more frequently personally identifiable than ever before, leading to a need to address the complex relationship between identity and privacy.

One argument put forward is that security will require identity. The increasing personal identifiability of data, facilitated by AI, has made it imperative to establish and protect individual identities for the sake of security. This argument highlights the evolving nature of security in the digital age and the need to adapt to these changes.

On the other hand, a positive stance is taken towards the potential of AI in enhancing security with the identity factor. It is suggested that AI can aid in securing identities by leveraging its capabilities. The specifics of how AI can contribute to this aspect are not explicitly mentioned, but it is implied that AI can play a role in ensuring the authenticity and integrity of identities.

Furthermore, the discussion recognises the necessity to address the dichotomy between identity and privacy. While identity is essential for security purposes, safeguarding privacy is equally important. This creates a challenge in finding a balance between the two. The analysis raises the question of how to deal with this dichotomy in future endeavours, emphasizing the need for a thoughtful and nuanced approach.

Legal measures are acknowledged as an important consideration in the context of AI. However, it is argued that relying solely on legal frameworks is not enough. This underlines the complexity of regulating AI and the urgent need for additional measures to ensure the responsible and ethical use of the technology. The mention of the Algorithmic Accountability Act in the USA and the European Union’s AI Act serves to highlight the efforts being made to address these concerns.

Overall, there is a positive sentiment regarding the potential of AI in enhancing security with the identity factor. The discussion reinforces the significance of ethical principles such as security by design and privacy by design when implementing AI solutions. It asserts that taking responsibility for AI and incorporating these principles into its development and deployment is essential.

It is worth noting that the expanded summary provides a comprehensive overview of the main points discussed. However, more specific evidence or examples supporting these arguments could have further strengthened the analysis. Nonetheless, the analysis highlights the intersection of identity, privacy, AI, and security and emphasizes the need for responsible and balanced approaches in this rapidly evolving landscape.

Amal El Fallah Seghrouchini

Artificial Intelligence (AI) has emerged as a powerful tool in the field of cybersecurity, with the potential to enhance and transform existing systems. By leveraging AI, common cybersecurity tasks can be automated, allowing for faster and more efficient detection and response to threats. AI can also analyze and identify potential threats in large datasets, enabling cybersecurity professionals to stay one step ahead of cybercriminals.

The importance of AI in cybersecurity is further highlighted by its recognition as a national security priority. Organizations such as the National Science Foundation (NSF), National Science and Technology Council (NSTC), and National Aeronautics and Space Administration (NASA) have emphasized the significance of AI in maintaining the security of nations. This recognition demonstrates the growing global awareness of the role that AI can play in safeguarding critical infrastructure and sensitive data.

However, the use of AI in cybersecurity also raises concerns about the vulnerability of AI systems. Adversarial machine learning techniques can be deployed to attack AI systems, potentially compromising their effectiveness. It is crucial to regulate the use of AI in cybersecurity to mitigate these vulnerabilities and ensure the reliability and security of these systems.
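The adversarial machine learning risk mentioned above can be illustrated with a toy sketch: a small, bounded perturbation of an input flips the decision of a linear classifier. The classifier weights, feature values, and epsilon below are invented purely for illustration and do not describe any system discussed in the session.

```python
# Toy linear classifier: score = w . x; a positive score means "benign".
w = [1.0, -2.0, 0.5]
x = [0.4, -0.3, 0.2]  # originally classified as benign (score > 0)

def score(features):
    return sum(wi * xi for wi, xi in zip(w, features))

def sign(v):
    return (v > 0) - (v < 0)

# FGSM-style evasion: shift each feature by at most epsilon in the
# direction that lowers the score (for a linear model the gradient of
# the score with respect to x is simply w).
epsilon = 0.5
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

print(score(x))      # positive: classified benign
print(score(x_adv))  # negative: decision flipped by a bounded change
```

The point of the sketch is that each feature moves by no more than epsilon, yet the classification changes, which is why defenses against such perturbations, and regulation of AI use in security-critical settings, are needed.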

Furthermore, AI is not only a tool for defending against cyber threats but can also be used to create new kinds of attacks. For example, AI-powered systems can be utilized for phishing, cyber extortion, and automated interactive attacks. The potential for AI to be used maliciously highlights the need for robust ethical and regulatory considerations in the development and deployment of AI systems in the cybersecurity domain.

Ethical and regulatory considerations are necessary to strike a balance between the power of AI and human control. Complete delegation of control to AI in cybersecurity is not recommended, as human oversight and decision-making are essential. Frameworks should be established to ensure the ethical use of AI and to address concerns related to privacy, data governance, and individual rights.

Initiatives aimed at differentiating between identifier and identity are being pursued to strengthen security and privacy measures. By avoiding the use of a unique identifier for individuals and instead associating sectorial identifiers with identity through trusted third-party certification, the risk of data breaches and unauthorized access is reduced.
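The separation of identifier and identity described above can be sketched in code: a trusted third party holds a master secret and derives a distinct, unlinkable identifier per sector, so no single global identifier for a person ever circulates. The function name, the choice of HMAC-SHA-256, and the example secret are illustrative assumptions, not a description of any deployed certification scheme.

```python
import hmac
import hashlib

def sectorial_identifier(master_secret: bytes, person_id: str, sector: str) -> str:
    """Derive a sector-specific pseudonym: the same person receives
    different identifiers in different sectors, and only the trusted
    third party holding the secret can relate them to one identity."""
    message = f"{sector}:{person_id}".encode()
    return hmac.new(master_secret, message, hashlib.sha256).hexdigest()

secret = b"held-only-by-the-trusted-third-party"  # illustrative value
health_id = sectorial_identifier(secret, "alice", "health")
tax_id = sectorial_identifier(secret, "alice", "tax")

# Identifiers are stable within a sector but differ across sectors,
# so databases keyed on them cannot be joined on a common identifier.
print(health_id != tax_id)
```

Because each sectorial identifier is derived rather than shared, a breach of one sector's database does not reveal a key usable in another, which is the risk-reduction property the summary refers to.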

In addition to data protection, ethics in AI extend to considerations of dignity and human rights. It is essential to incorporate these ethical principles into the design and implementation of AI systems. Furthermore, informed consent and user awareness are crucial in ensuring that individuals understand the implications and potential risks associated with using generative AI systems.

Preserving dignity and human rights should be a priority in all systems, including those powered by AI. This encompasses a continuous debate and discussion in which the principles of ethics play a central role. Educating the population and working towards informed consent are important steps in achieving a balance between the benefits and potential harms of AI.

Accountability, privacy, and data protection are recognized as tools towards ensuring ethical practices. These principles should be integrated into the development and deployment of AI systems to safeguard individual rights and maintain public trust.

Overall, AI has the potential to revolutionize cybersecurity, but its implementation requires careful consideration of ethical, regulatory, and privacy concerns. While AI can enhance and transform the field of cybersecurity, there is a need for comprehensive regulation to address vulnerabilities. The differentiation between identifier and identity, as well as the emphasis on dignity and human rights, are important factors to consider in deploying AI systems. Promoting informed consent, user awareness, and ethical use of AI should be prioritized to maintain a secure and trustworthy digital environment.

Audience

During the discussion, the speakers delved into the implementation of ethical AI in the field of cybersecurity and raised concerns regarding its potential disadvantages when countering unethical adversarial AI. They emphasised that adversaries employing adversarial AI techniques are unlikely to consider ethical principles and may operate without any regard for the consequences of their actions.

The audience expressed apprehension about the practicality and effectiveness of using ethical AI in defending against unethical adversarial AI. They questioned whether the application of ethical AI would provide a sufficient response to the increasingly sophisticated and malicious tactics employed by adversaries. It was noted that engaging in responsive actions by deploying ethical AI to counter unethical adversarial AI might place defenders at a disadvantage, highlighting the complexity of the issue.

Given these concerns, the need for a thorough review of the application of ethical AI in response to unethical adversarial AI was acknowledged. There was specific emphasis on active cyber defence, which involves proactive measures to prevent cyber attacks and mitigate potential harm. The aim of the review is to ensure that the use of ethical AI is optimised and effectively aligned with the challenges posed by unethical adversarial AI.

These discussions revolved around the topics of Ethical AI, Adversarial AI, Cybersecurity, and Active Cyber Defence, all of which are highly relevant in today’s digital landscape. The concerns raised during the discussion reflect the ongoing tension between the desire to uphold ethical principles and the practical challenges faced when countering adversaries who disregard those principles.

Furthermore, this discussion aligns with the Sustainable Development Goals (SDGs) 9 and 16, which emphasise the importance of creating resilient infrastructure, fostering innovation, promoting peaceful and inclusive societies, and ensuring access to justice for all. By addressing the ethical challenges associated with adversarial AI in cybersecurity, efforts can be made towards achieving these SDGs, as they are integral to building a secure and just digital environment.

Overall, the discussion underscored the need for careful consideration and evaluation of the application of ethical AI in response to unethical adversarial AI. Balancing the ethical dimension with the practical requirements of countering adversaries in the ever-evolving digital landscape is a complex task that warrants ongoing discussion and analysis.

Anastasiya Kozakova

Artificial Intelligence (AI) plays a crucial role in enhancing cybersecurity by improving threat detection and intelligence gathering. However, concerns have been raised regarding the autonomous nature of AI and its potential to make impactful decisions in everyday life. It is argued that AI should not operate solely autonomously, highlighting the importance of human oversight in guiding AI’s decision-making processes.

A major concern in the field of AI is the prospect of conflicting AI regulations being established by major markets, including the EU, US, and China. Such regulatory fragmentation could limit AI's benefits and hinder its adoption. Harmonized regulations are important to ensure that the opportunities of AI remain widely accessible to different communities.

The challenge of defining AI universally is another issue faced by legislators. With AI evolving rapidly, it becomes increasingly difficult to encompass all technological advancements within rigid legal frameworks. Instead, the focus should be on regulating the outcomes and expectations of AI, rather than the technology itself. This flexible and outcome-driven approach allows for adaptable regulations that keep up with the dynamic nature of AI development.

In the realm of cybersecurity, the question arises of whether organizations should have the right to “hack back” in response to attacks. Most governments and industries agree that organizations should not have this right, as it can lead to escalating cyber conflicts. Instead, it is recommended that law enforcement agencies with the appropriate mandate step in and investigate cyberattacks.

The challenges faced in cyberspace are becoming increasingly sophisticated, requiring both technical and policy solutions. Addressing cyber threats necessitates identifying the nature of the threat, whether it is cyber espionage, an Advanced Persistent Threat (APT), or a complex Distributed Denial of Service (DDoS) attack. Hence, integrated approaches involving both technical expertise and policy frameworks are essential to effectively combat cyber threats.

Ethical behavior is emphasized in the field of cybersecurity. It is crucial for good actors to abide by international and national laws, even in their reactions to unethical actions. Reacting unethically to protect oneself can compromise overall security and stability. Therefore, ethical guidelines and considerations must guide actions in the cybersecurity realm.

The solution to addressing cybersecurity concerns lies in creativity and enhanced cooperation. Developing new types of response strategies and increasing collaboration between communities, vendors, and governments are vital. While international and national laws provide a foundation, innovative approaches and thinking must be utilized to develop effective responses to emerging cyber threats.

Regulations play an important role in addressing AI challenges, but they are not the sole solution. The industry can also make significant strides in enhancing AI ethics, governance, and transparency without solely relying on policymakers and regulators. Therefore, a balanced approach that combines effective regulations with industry initiatives is necessary.

Increased transparency in software and AI-based solution composition is supported. The initiative of a “software bill of materials” is seen as a positive step towards understanding the composition of software, similar to knowing the ingredients of a cake. Documenting data sources, collection methods, and processing techniques promotes responsible consumption and production.
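A minimal sketch of what such a "software bill of materials" might record for an AI-based product follows. It is loosely inspired by formats such as CycloneDX, but the product, components, and fields here are invented for illustration:

```python
import json

# A toy "bill of materials" for an AI-based product: which libraries and
# datasets went in, analogous to listing the ingredients of a cake.
sbom = {
    "product": "example-threat-detector",
    "components": [
        {"name": "scikit-learn", "version": "1.4.2", "type": "library"},
        {
            "name": "example-threat-feed",
            "type": "dataset",
            "source": "https://example.org/feed",
            "collected": "2023-09-01",
            "processing": "deduplicated, personal identifiers stripped",
        },
    ],
}
print(json.dumps(sbom, indent=2))
```

Documenting data sources, collection dates, and processing steps alongside code dependencies is what distinguishes an AI-oriented bill of materials from a purely software one.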

In conclusion, AI has a significant impact on cybersecurity, but it should not operate exclusively autonomously. Addressing challenges such as conflicting regulations, defining AI, the right to “hack back,” and increasing sophistication of cyber threats requires a multidimensional approach that encompasses technical expertise, policy frameworks, ethical considerations, creativity, and enhanced cooperation. Effective regulations, industry initiatives, and transparency in software composition all contribute to a more secure and stable cyberspace.

Noushin Shabab

Kaspersky, a leading cybersecurity company, has harnessed the power of artificial intelligence (AI) and machine learning to strengthen cybersecurity. They have integrated machine learning techniques into their products for an extended period, resulting in significant improvements.

Transparency is paramount when using AI in cybersecurity, according to Kaspersky. To achieve this, they have implemented a global transparency initiative and established transparency centers in various countries. These centers allow stakeholders and customers to access and review their product code, fostering trust and collaboration in the cybersecurity field.

While AI and machine learning have proven effective in cybersecurity, it is crucial to protect these systems from misuse. Attackers can manipulate machine learning outcomes, posing a significant threat. Safeguards and security measures must be implemented to ensure the integrity of AI and machine learning systems.
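The risk that attackers manipulate machine learning outcomes can be illustrated with a toy evasion example: a small, targeted change to one input feature flips a linear scorer's verdict. The weights and features below are invented for illustration and do not come from any real product:

```python
# Toy linear "malware score" over two features
# (e.g. file entropy and import count), purely illustrative numbers.
weights = [2.0, -1.0]
bias = -0.5

def classify(x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "malicious" if score > 0 else "benign"

sample = [0.30, 0.05]           # scores just above the decision boundary
evasion = [0.24, 0.05]          # attacker nudges one feature slightly

print(classify(sample))   # malicious
print(classify(evasion))  # benign: a small, targeted change flips the verdict
```

Safeguards such as auditing models, limiting reliance on third-party training data, and keeping human review over borderline verdicts are responses to exactly this kind of manipulation.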

Kaspersky believes that effective cybersecurity requires a balance between AI and human control. While machine learning algorithms are adept at analyzing complex malware, human involvement is essential for informed decision-making and responding to evolving threats. Kaspersky combines human control with machine learning to ensure comprehensive cybersecurity practices.

Respecting user privacy is another vital consideration when incorporating AI in cybersecurity. Kaspersky has implemented measures such as pseudonymization, anonymization, data minimization, and personal identifier removal to protect user privacy. By prioritizing user privacy, Kaspersky provides secure and trustworthy solutions.
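One of the measures mentioned, removing personal identifiers from URLs, can be sketched as follows. This is an illustrative example of data minimization, not a description of Kaspersky's actual processing:

```python
from urllib.parse import urlsplit, urlunsplit

def minimize_url(url: str) -> str:
    """Keep only scheme and host; drop userinfo, path, query and fragment,
    which often carry usernames, tokens or session identifiers."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.hostname or "", "", "", ""))

print(minimize_url("https://user@example.com/account/42?session=abc#top"))
# https://example.com
```

The host alone is usually enough for threat-reputation lookups, so everything more identifying can be discarded before the URL leaves the user's machine.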

Collaboration and open dialogue are emphasized by Kaspersky in the AI-enabled cybersecurity domain. They advocate for collective efforts and knowledge exchange to combat cyber threats effectively. Open dialogue promotes the sharing of insights and ideas, leading to stronger cybersecurity practices.

It is crucial to be aware of the potential misuse of AI by malicious actors. AI can facilitate more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. However, Kaspersky highlights that advanced security solutions, incorporating machine learning, can identify and mitigate such attacks.

User awareness and education are essential to counter AI-enabled cyber threats. Kaspersky underscores the importance of educating users to understand and effectively respond to these threats. Combining advanced security solutions with user education is a recommended approach to tackle AI-enabled cyber threats.

In conclusion, Kaspersky’s approach to AI-enabled cybersecurity encompasses leveraging machine learning, maintaining transparency, safeguarding systems, respecting user privacy, and promoting collaboration and user education. By adhering to these principles, Kaspersky aims to enhance cybersecurity practices and protect users from evolving threats.

Dennis Kenji Kipker

The discussions revolve around the integration of artificial intelligence (AI) and cybersecurity. AI has already been used in the field of cybersecurity for automated anomaly detection in networks and to improve overall cybersecurity measures. The argument is made that AI and cybersecurity have been interconnected for a long time, even before the emergence of use cases like generative AI.

It is argued that special AI regulation specifically for cybersecurity is not necessary. European lawmakers are mentioned as leaders in cybersecurity legislation, using the term “state-of-the-art of technology” to define the compliance requirements for private companies and public institutions. It is mentioned that attacks using AI can be covered by existing national cyber criminal legislation, without the need for explicit AI-specific regulation. Furthermore, it is highlighted that the development and security of AI is already addressed in legislation such as the European AI Act.

The need for clear differentiation in the regulation of AI and cybersecurity is emphasized. Different scenarios need different approaches, distinguishing between cases where AI is one of several technical means and cases where AI-specific risks need to be regulated.

The privacy risks associated with AI development are also acknowledged. High-impact privacy risks can arise during the development process and need to be carefully considered and addressed.

The struggles in implementing privacy laws and detecting violations are mentioned. It is suggested that more efforts are needed to effectively enforce privacy laws and detect violations in order to protect individuals’ privacy.

While regulation of AI is deemed necessary, it is also suggested that it should not unnecessarily delay or hinder other necessary regulations. The European AI Act, with its risk classes, is mentioned as a good first approach to AI regulation.

The importance of cooperation between the state and industry actors is emphasized. AI is mainly developed by a few big tech players from the US, and there is a need for closer collaboration between the state and industry actors for improved governance and oversight of AI.

It is argued that self-regulation by industries alone is not enough. Establishing a system of transparency on a permanent legal basis is seen as necessary to ensure ethical and responsible AI development and deployment.

Additional resources and stronger supervision of AI are deemed necessary. Authorities responsible for the supervision of AI should be equipped with more financial and personnel resources to effectively monitor and regulate AI activities.

The need for human control in AI-related decision-making is emphasized. Official decisions or decisions made by private companies that can have a negative impact on individuals should not be solely based on AI but should involve human oversight and control.

Safety in AI development is considered paramount. It is emphasized that secure development practices are crucial to ensure the safety and reliability of AI solutions.

Lastly, it is acknowledged that while regulation plays a vital role, it alone cannot completely eliminate all the problems associated with AI. There is a need for a comprehensive approach that combines effective regulation, cooperation, resources, and human control to address the challenges and maximize the benefits of AI technology.

Jochen Michels

During the session, all the speakers were in agreement that the six ethical principles of AI use in cybersecurity are equally important. This consensus among the speakers highlights their shared understanding of the significance of each principle in ensuring ethical practices in the field.

Furthermore, the attendees of the session also recognized the importance of all six principles. The fact that these principles were mentioned by multiple participants indicates their collective acknowledgement of the principles’ value. This shared significance emphasizes the need to consider all six principles when addressing the ethical challenges posed by AI in cybersecurity.

However, while the participants acknowledged the equal importance of the principles, they also agreed that further multi-stakeholder discussion is necessary. This discussion should involve a comprehensive range of stakeholders, including industry representatives, academics, and political authorities. Involving all these parties makes it possible to take a holistic and inclusive approach to the ethical implications of AI use in cybersecurity.

The need for this multi-stakeholder discussion becomes evident through the variety of principles mentioned in a poll conducted during the session. The diverse range of principles brought up by the attendees emphasizes the importance of engaging all involved parties to ensure comprehensive coverage of ethical considerations.

In conclusion, the session affirmed that all six ethical principles of AI use in cybersecurity are of equal importance. However, it also highlighted the necessity for further multi-stakeholder discussion to ensure comprehensive coverage and engagement of all stakeholders. This discussion should involve representatives from industry, academia, and politics to effectively address the ethical challenges posed by AI in cybersecurity. The session underscored the significance of partnerships and cooperation in tackling these challenges on a broader scale.

Moderator

The panel discussion on the ethical principles of AI in cybersecurity brought together experts from various backgrounds. Panelists included Professor Dennis Kenji Kipker, an expert in cybersecurity law from Germany; Professor Amal El Fallah Seghrouchini, Executive President of the AI Movement, the Moroccan International Center for Artificial Intelligence; Ms. Noushin Shabab, a Senior Security Researcher from Kaspersky in Australia; and Ms. Anastasia Kazakova, a Cyber Diplomacy Knowledge Fellow from the Diplo Foundation in Serbia.

The panelists discussed the potential of AI to enhance cybersecurity but stressed the need for a dialogue on ethical principles. AI can automate common tasks and help identify threats in cybersecurity. Kaspersky detects 325,000 new malicious files daily and recognizes the role AI can play in transforming cybersecurity methods. However, AI systems in cybersecurity are vulnerable to attacks and misuse. Adversarial AI can attack AI systems and misuse AI to create fake videos and AI-powered malware.

Transparency, safety, human control, privacy, and defense against cyber attacks were identified as key ethical principles in AI cybersecurity. The panelists emphasized the importance of transparency in understanding the technology being used and protecting user data. They also highlighted the need for human control in decision-making processes, as decisions impacting individuals cannot solely rely on AI algorithms.

The panelists and online audience agreed on the equal importance of these ethical principles and called for further discussions on their implementation. The moderator supported multi-stakeholder discussions and stressed the involvement of various sectors, including industry, research, academia, politics, and civil society, for a comprehensive and inclusive approach.

Plans are underway to develop an impulse paper outlining ethical principles for the use of AI in cybersecurity. This paper will reflect the discussion outcomes and be shared with the IGF community. Feedback from stakeholders will be gathered to further refine the principles. Kaspersky will also use the paper to develop their own ethical principles.

In summary, the panel discussion highlighted the ethical considerations of AI in cybersecurity. Transparency, safety, human control, privacy, and defense against cyber attacks were identified as crucial principles. The ongoing multi-stakeholder discussions and the development of an impulse paper aim to provide guidelines for different sectors and promote an ethical approach to AI in cybersecurity.

Session transcript

Moderator:
the meeting to order. Let me maybe just start by introducing all the speakers from the panel that we have today. We’ll start with my left. We have Professor Dennis Kenji Kipker from the University of Bremen. He’s an expert in cybersecurity law from Germany. And I have on my right Professor Amal, who is Executive President for the AI Movement, the Moroccan International Center for Artificial Intelligence, Morocco. And then on my far left would be Ms. Nushin, who is Senior Security Researcher, Global Research and Analysis Team from Kaspersky in Australia. And of course, on my far right, last but definitely not the least, Ms. Anastasia Kazakova, Cyber Diplomacy Knowledge Fellow from Diplo Foundation, flown in from Serbia. And myself, I am Jeanne Sujin Gan, Head of Government Affairs and Public Policy for Asia Pacific, Japan, Middle East, Turkey, and Africa regions from Kaspersky. Well, today’s workshop is titled Ethical Principles for the Use of AI in Cybersecurity. And of course, by way of a background and setting of the context, we basically are currently witnessing a rapid development of AI around the world for some time now. And it really has the potential to bring many benefits to the world as we have all probably experienced on a day to day basis, including enhancing the level of cybersecurity. AI algorithms help with rapid identification and response to security threats, and automate and enhance the accuracy of threat detection, for instance. And this is something that we experience in Kaspersky because we are a cybersecurity company. But of course, While numerous of these general ethical principles and foundations for AI have already been developed by various stakeholders, for example, in 2021, the UNESCO actually adopted the recommendations on the ethics of AI. 
However, the growing use of AI and machine learning components actually in cybersecurity makes ever more urgent the need for ethical principles of AI development, distribution, and utilization in this domain. Due to the particular opportunities, but also risks of AI in cybersecurity, there is a need for a broad dialogue for on such specific ethical principles, which we felt today is a good opportunity for us to sort of discuss that. And also for this reason, we at Kaspersky actually has developed initial ideas regarding aspects that should be taken into account there. And of course, these will be discussed in today’s workshop. So just to sort of run you through the structure of the workshop and what we plan to do in terms of our agenda today, we’re gonna start in a moment to run some survey with our audience today, including those who have dialed in online with two poll questions, which I’ll ask my colleague, Johan, to pull out in a moment, followed by, you know, our speakers being asked the first round of questions, and then we’ll take some questions from the floor as well. And before we end the session today, so I promise, you know, our panel of speakers are really experts in their respective domains, and put together, we’re gonna expect some very good discussions. So without further ado, let me just invite Johan, who is joining us online, and we should be able to see him to run the first online poll question. Yes, Johan, we see you. Thank you.

Jochen Michels:
Yeah, I spotted the poll.

Moderator:
Yes, and we can hear you too. Very good. So the first question, Johan will put up, is, in your opinion, is the use of AI in cybersecurity more likely to strengthen or weaken the level of protection? In your opinion, is the use of AI in cybersecurity more likely to strengthen or weaken the level of protection? Of course, we have got options for people who are participating in the poll. Of course, the first option is that it will strengthen protection. Second, it will weaken protection. And the third one is, in the name of democracy, we allow you to say you don’t know. So let’s just give this a moment and I will wait for Johan to… Ah, looking good. Okay, in your opinion, is the use of AI in cybersecurity more likely to strengthen or weaken the level of protection? I think we have got 62% who have said that it will strengthen protection. Let me just write this down. 20% say that it will not, it will in fact weaken. And 20% have exercised their right to say that they don’t know. That’s good. And I think this is something that we will flesh out in a little bit with the presentations from our speakers. I would also want to just invite Johan to put up the second poll question. I think we are ready to close this poll. Let’s pull out the second poll question. We only have two to start off before we get into the panel discussion. So let’s call up the second poll question. The second poll question is: what should prevail? Yeah, we see it. Thank you. The second question is what should prevail in AI regulation specifically for cybersecurity? Of course, the answers include, number one, it should be regulated as heavily as generative AI. Second, there is no need for regulation. Voluntary adherence is best. Ethical principles would do just good. And of course, the third option would be existing cybersecurity regulation need to be updated to account for AI technologies.
I’m not sure if the poll is working well off with the online audience. Let’s hear from Johan. It’s working, yes. Fantastic, thank you. I will wait some further seconds and then I will end poll. Thank you. Okay. Interesting, interesting. What should prevail in AI regulations specifically for cybersecurity? Only a single choice was allowed and I think we’ve got 38% of our audience saying that it should be regulated as heavily as generative AI. Nobody selected, no need for regulation. So I think we have, well, at least some agreement there. And 63% are saying the existing cybersecurity regulation need to be updated. That’s interesting. Let’s just park that aside for a while. I think, thank you, Johan. We’ll have you back with us later on in today’s session. We can close the poll. Thank you, Johan. Now, I think I’m going to be opening up some questions later on to our panelists, but I would first call on Nushin to perhaps, she’s got some slides for us also. Some slides, yeah. And I’ll just invite Nushin to please deliver some short remarks, her impulse speech on opportunities and risks of AI and cybersecurity and what ethical principles she feels should be developed to promote the opportunities and mitigate the risks. Nushin, please.

Noushin Shabab:
Okay, thanks, Jenny. I’m not sure if the slides, okay, great. So as my colleague perfectly stated and most of the audience agree, AI and in particular machine learning has actually helped to strengthen cybersecurity in a lot of ways. We have been using machine learning techniques in our products at Kaspersky for a long time. So it’s not something new for us, but as we have always had this concern about the ethical principles of using AI and machine learning in cybersecurity, we thought to use this opportunity to share a little bit about some of the basic principles that we believe that are important in, sorry, in the use of AI in cybersecurity. And we want to have a discussion today and yeah, maybe develop these principles further. Let me start with the first principle. So the first one is transparency. We believe that it’s important and it is the user’s rights to know if a cybersecurity solution has been using AI and machine learning and the companies, the service providers need to be transparent about the use of AI. We have a global transparency initiative. And as part of this initiative, we have transparency centers in different countries in the world. And the number is actually growing. We are opening more centers and in these centers, stakeholders and customers, enterprises, they can go and inspect and visit the centers and look at the code of our products, including how AI and machine learning has been used in our products. So we commit to being transparent and making sure that users know and consent to their data and their contribution to our network is transparent. And they are aware of that machine learning techniques are used in the products. Number two, safety. So when it comes to the use of AI and machine learning in real world, there are actually a lot of ways that these systems can be misused by malicious actors to make them make mistakes deliberately. 
So there are various techniques that the attackers can use to try to manipulate the outcome of machine learning systems and algorithms. That’s why we believe that having safety of the AI and machine learning systems in mind is very important. And towards this principle, we have a lot of security measures in place, like auditing our systems with machine learning, reducing the use of third party data sets for the training for machine learning systems. and also a lot of other techniques, such as making sure that we favor the cloud-based machine learning algorithms to the ones that are actually stored and deployed on the user system. Number three, human control. So we all agree that AI can help a lot in a lot of areas in cybersecurity. For example, in improving detection of malicious behavior, in anomaly analysis, and so on. But when it comes to sophisticated malwares, especially with advanced persistent threats, it’s very important to understand that these type of malwares, they mutate, they adopt different techniques, encryption, obfuscation, and so on, to actually bypass machine learning and AI systems. Because of this, we always have human control over our machine learning systems. And we believe that it’s important to have an expert that has good knowledge and understanding, and is backed by a big data set, big data of cyber threats, to supervise the outcome of machine learning algorithms. That’s how human control has been always there for the systems that we use machine learning for. Number four, privacy. When we talk about big data, and data from cyber threats, it always comes with some sort of information that can be considered as personal identifier data. So we believe that it’s users’ right to have privacy on their personal data. That’s why we have a lot of measures to make sure that the privacy of users are considered when it comes to machine learning algorithm, and the data that is used to train these algorithms. 
By many ways, like pseudonymizing, anonymizing, reducing the data collection from users, removing personal identifiable information from URLs, or other data that comes from user systems. Number five, develop for cybersecurity. So as our mission to create a safe world, we are committed to only use and provide services that work in defense. So along with this principle, we have the services that use machine learning and AI developed only for defensive practices. And we encourage other companies to join us in this principle too. Last but not least, that’s actually why we are here, and we have this discussion here. We are open for dialogue. We believe that it’s only through collaboration between various parties, and between everyone in the industry, and in government sector, that we can truly achieve the best result, the best protection for users and user data against cyber attacks and cyber threats. So that was it. Thank you.

Moderator:
Thank you very much, Dushan. I think that sort of, I hope, sets the stage and sort of sets the tone to today’s discussion, because we really are focusing For those who’ve just joined us, we are focusing our workshop today, really discussing the ethical principles for the use of AI in cybersecurity. And also, I think I just want to take this time to sort of hear from a more technical scientific perspective from Amal on how can the microphone be. How can AI or machine learning techniques contribute to cybersecurity and which issues can emerge while using AI techniques for cybersecurity and how can we solve these issues? I think you also have some slides, if we can put up some slides. Yes, we see them.

Amal El Fallah Seghrouchini:
Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there is a need of regulation like most people voted earlier. So, my presentation will be very short, even if there are a lot of points. But mainly, I would like to emphasize where AI can be used in cybersecurity, because the ethical problems comes from the way we will use AI in cybersecurity. So, the context is that, as you all know, cybersecurity is a very huge problem for all software around. And in this presentation, as Jeannie said, I will address some points related to how AI is included in cybersecurity systems. So, as you know, Kaspersky detects like 325,000 new malicious files every day, and this comes from a report FireEye in 2017, so I think today there are much more. The problem with classical methods for cyber security is that there is slow detections and also slow neutralizations. And what we expect from AI is to enhance and transform cyber security methods by providing predictive intelligence and long life cycle of the software. So the role of AI more specifically in cyber security is twofold. The first thing is that AI can automate common cyber security tasks like vulnerability management, threat detections, et cetera. And also thanks to AI, we can identify threats in large data sets that have not been analyzed manually. So as you can see, cyber security and AI is a national security priority by the NSF, NSTC and NASA today. So what I want to present is that there are two kinds of AI. The first boxes in the left represent what we call a blue AI. And in the right, you have the red AI. The blue AI presents some opportunities for cyber security. For example, AI will help to create smart cyber security. For example, effective security controls, automatic vulnerability discovery, et cetera. And also in the fourth point, by using AI, you can fight cyber criminals. For example, for fraud detections, analysis, intelligence encryption, fight against fake news, et cetera. 
And this is the good news for using AI in cyber security. But as you know, cyber security, these techniques or these AI systems are also vulnerable and raise a lot of challenges like robustness, vulnerability of algorithms, of AI algorithms, and also some misuses of AI. For example, by creating fake videos, AI-powered malware, smarter social engineering attacks, et cetera. So AI for cyber security, I will go very fast, don’t worry, AI in the domain of cyber security will help in all these steps. And this is the NIST-CSF framework, how to identify, understand your assets and resources, protect by developing and implementing appropriate protection measures, detect by identifying the occurrence of a cyber security event, respond by taking action if a cyber security event is detected, and finally restore activities that aim to maintain resilience plans. So this is the lifecycle of cyber security, defensive cyber security, and AI can be used at all the stages of this lifecycle. So I can say that the ethical issues of using AI in cyber security can be studied through these five steps. For example, if you identify your asset and you should be sure that your resources are resilient, are not vulnerable, to protect also and detect, et cetera. So how do we implement all this by using AI techniques? I will not detail all the phases. But for example, in identification step, we will use some tasks. If I address some tasks of cybersecurity like fuzzing, pen testing, et cetera, the techniques of AI that I will be able to use and they are used in practice today are deep learning, reinforcement learning, deep learning and reinforcement learning for classification of bugs, and also some NLP and methods of machine learning. This means that all the problems that come with AI techniques will be found again in dealing with cybersecurity. 
So this is only the identification step, and, although I don't have time to show it, we could do the same for all the phases of cybersecurity. Now, we can also use techniques from cybersecurity to secure AI systems, to make them more robust. This is the challenge of red AI: robustness and the vulnerability of AI algorithms. For example, there are very well-known adversarial machine learning techniques that can be used to secure, or to attack, AI systems and algorithms. This is why I say that adversarial AI attacks AI systems. AI cannot be made unconditionally safe, like any other technology, so we have to make sure that the AI systems we use in cybersecurity are not compromised by malicious attacks. There is a very famous example in computer vision: if you look at the two pictures, they look identical, but the AI system detects different things. Sometimes it is just a question of changing one pixel in the picture, and you get a different output. In one picture the system correctly recognizes a car, but in the other it recognizes an ostrich. A human being cannot see the difference, but the machine learning algorithm gives different answers. The last point is misuse of AI, for example creating fake videos, which are very common today, AI-powered malware, smarter social engineering attacks, and so on. I will end with this: we know today that AI can create new kinds of cyber attacks for phishing, cyber extortion, automated interactive attacks, et cetera. For example, using generative AI in cyber extortion is very common today. So the need for regulation is crucial. We inherit all the issues that come with software, but we also have problems very specific to the cybersecurity domain, and AI will bring major ethical and regulatory challenges to cybersecurity as well.
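The car/ostrich example rests on gradient-based adversarial perturbations. A minimal sketch of the idea is the fast gradient sign method (FGSM) applied to a toy logistic classifier — an assumption made for illustration only, since the attacks the speaker cites target deep image networks where the perturbation is imperceptibly small.

```python
import math

def predict(w, x, b=0.0):
    """Probability of class 1 under a logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method: for logistic loss, the gradient of the
    loss w.r.t. the input is (p - y) * w; stepping eps in its sign
    direction increases the loss as much as a bounded perturbation can."""
    p = predict(w, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else (-1.0 if g < 0 else 0.0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]
```

With weights [2, -1] and input [1, 1] (true label 1), the clean prediction is above 0.5, but an eps = 0.6 perturbation pushes it below 0.5 — the toy analogue of a small input change flipping the model's decision.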
So my conclusion is that we need ethical and regulatory consideration for cybersecurity systems. On delegation of control, we have to find a consensus between total human control and total autonomy of AI systems: delegation of control should be granted with that sole objective in mind, and not move towards total autonomy of AI in cybersecurity. And cybersecurity actors are still looking for an adequate legal basis to conduct their daily testing practices, for example regarding privacy and data governance in cybersecurity. Thank you for your attention.

Moderator:
Thank you very much. That was wonderful. I'm already madly taking notes, because I'm going to have to synthesize all of this. But before I do so, and before we have a full panel discussion with perhaps some questions from the floor, I'd like to pass the time over to Anastasiya, who will talk with us about some of the current trends and reflections on AI policies, in particular in the field of cybersecurity, and maybe give an impulse statement on the chances, the risks and the value of ethical principles.

Anastasiya Kozakova:
Thank you very much. It's a pleasure to be here. I represent a civil society organization and work on policy; before this I worked in the private sector as a cybersecurity expert, also with a focus on policy. In my current work we also discuss with multiple stakeholders how cyber norms for responsible behavior could be implemented. While we are not solely focused on AI, we largely focus on norms for responsible behavior in the context of international security and peace, and of overall cyber stability. AI policy is definitely getting more and more attention, so thanks to the previous speakers. We've seen that AI entered the world of cybersecurity quite a few years ago; it helps to enhance detection and to collect intelligence for better analysis of cyber threats. Many, if not all, cybersecurity companies these days, especially advanced ones, apply AI to some extent in how they deal with threats and in the intelligence they produce for their customers. The big questions, though, are: how does it work? What kind of data do the companies use for this? How does the AI, which remains a mysterious black box even for many who develop it, arrive at a particular decision? These really important questions are, I think, among the key fundamental challenges, and they are on the minds not only of policymakers and users, but also of those who develop AI and AI-based solutions. In this regard, retaining human control, as all the speakers have said already, is really fundamental. AI should not be autonomous, because we cannot allow something we don't completely understand to have such a big impact on human life. And we see that humans are afraid of this, right?
But even though policymakers have started discussing how to make AI more predictable, transparent and ethical, the question remains: if we retain human control, and give control back to humans — to developers, and to academics who would like to inspect the algorithms — how will this control be split up between actors, and which actors will be at the table? Who, in the end, will retain the biggest share of control among humans? Will it be the developers of AI, the policymakers, or academia? How do we ensure that the data collected on a massive scale for AI is not monopolized by one actor, or just a few actors, in the market? How do we make sure that academia and civil society have access to analyze what kind of data is used, under which policies and processes, and that data protection and security risks are properly mitigated? These are really open, difficult, and very contextual questions. Whether we speak about the impacts of AI on society, the economy, security, or international security, all of these questions will be decided in a particular context, and that is what makes them both important and challenging. One of the other challenges in all the emerging policies and regulations aimed at making AI more transparent and more ethical is, of course, to define AI. There is, I think, no universal definition so far of what AI is, and policymakers have really struggled to carefully scope future laws and to pin down what AI exactly entails in a particular context. So one aspect that is really important for policymakers and legislators is to make sure that laws focus on outcomes and expectations, not on the technology itself.
That will actually help make these laws more future-proof, and it will focus them on what really concerns people. As ordinary users, we don't want to know how the code is written; we want to know how this code will impact our lives, our security, our jobs, our communities, and society at a broader scale. The other aspect I wanted to mention is that currently we tend to frame lots of policies and regulations narrowly, in terms of AI and cybersecurity. Here I would agree with the audience that participated in the poll: most people said that existing cybersecurity regulation needs to be strengthened, rather than new regulation developed specifically for AI in cybersecurity. I agree with this. It's really important to look broadly, on a horizontal level: AI is one more technology, one more piece of code in the end, even though it is a really complicated, fascinating, and difficult piece of code. What matters is which impacts it produces for different stakeholders. In this regard, there are already existing and emerging laws regulating the security of data in particular contexts, the security of critical infrastructure, and so on. AI, I believe, complicates the picture, but does not require a new approach from scratch. Yes, it complicates the current picture a lot, and it probably requires innovative discussions, but we still need to look at the impacts the technology has on us. I also wanted to say that we do see emerging discussions on the impacts on international security and peace, notably within the UN and within regional fora, but they are still not as extensive as they should be.
The problem is that the international community, and those engaged in these discussions, including diplomats, still lack substantial evidence about how advanced AI tools, if they exist, can be used for both defensive and offensive purposes. The knowledge is very limited; there is a lot of secrecy, and the knowledge is not accessible to the broad public, or even to a limited group of academics, unfortunately. At the same time, there are growing interests and calls in the international community to produce rules of the road for regulating AI in terms of cybersecurity, especially where AI can be used in a military context, on the battlefield. I think that is really important, and hopefully we will see more dynamics there. But so far, I want to highlight: to make these discussions more evidence-based and more substantive, we need to understand what kinds of tools are already out there, and to increase transparency about the different types of actors involved in cyber activity. I would conclude on this question by saying that, speaking of regulations overall, it is already evident that the large markets, such as the EU, the US, China and other countries, will probably pass conflicting regulations concerning AI quite soon. I think we heard yesterday from a US diplomat that the US is preparing an executive order on artificial intelligence, and the G7 leaders have also committed to establishing a set of guiding rules for AI. So we see the appetite to define the rules: who will have the ultimate power to define the impacts of AI in the future? Will it be governments, and which governments? Will it be the vendors, the companies? And how do we make sure it is not just one or a few companies?
The problem is that if more fragmentation happens in this field, as it happens overall in cybersecurity and in cyberspace, it will unfortunately leave fewer opportunities for different communities to truly benefit from what AI could bring to us as an international community and as a society. There are still beliefs, and hopes, that vendors, organizations or companies could take a lead, organize a sort of consortium, and adopt a voluntary self-regulation approach to be more transparent. What we just heard from Kaspersky, I think, is a good initiative, and we hear more and more initiatives, especially from companies extensively involved in AI, to be more open about what kind of data they use and how they process it. There is still an optimistic hope that if this conversation continues, a bottom-up approach will lead. In that case there will be more opportunities to avoid the risk of conflicting laws and fragmentation in this field, and to make sure that access to this technology, to the research, and to the discussion is much broader than just within the borders of one country or a few countries. But these are still open questions — there are many open questions, and all the emerging policies try to address them to some extent. What conclusions will come out of this is itself an open question, so let's see whether humans will be optimistic or pessimistic in solving it. Thank you.

Moderator:
Thank you, Nastia. I just want to finish off this preliminary round of remarks by inviting Dennis to speak about whether AI can be legally regulated at all, given the current political and technical difficulties with the AI Act in Europe, for example, and whether we aren't destroying innovation through over-regulation. So maybe I'll just hand the time over to Dennis.

Dennis Kenji Kipker:
Yeah, thank you very much, Jeannie, and thank you for the possibility to speak here today. As a professor of cybersecurity law, I definitely take a legal perspective on the whole topic of regulating AI. We need to draw a clear line, as the previous speakers already noted, because we are not talking about general AI regulation here, but about a very specific use case — just a slice of a broad use case scenario. In my opinion, AI and cybersecurity are two topics that came together long before use cases like generative AI became public in recent months. For example, AI is used for cybersecurity in automated anomaly detection in networks, and I wrote publications about that six years ago. This begs the question, regarding this very specific use case: do we need special AI regulation for cybersecurity in the future? My answer is quite clear: no. It might be interesting, but to justify this answer we need to differentiate again, because there are three different use case scenarios that we have to look at more closely. First, AI is used to improve cybersecurity. Second, AI is used to compromise cybersecurity. Third, AI in general is being developed. The first two scenarios are quite easy to answer from a legal perspective. When AI is used to improve cybersecurity, it is technically one of several possible measures that can improve cyber resilience. European lawmakers, who in my opinion currently lead the world in cybersecurity legislation, show this, for example with the new Network and Information Security Directive that became effective at the beginning of this year, or with the draft version of the Cyber Resilience Act.
We have a lot of upcoming cybersecurity regulation, and the point is that in this cybersecurity-specific legislation, the European lawmakers have so far avoided exclusively naming specific technologies to realize an appropriate level of cybersecurity. Instead they use the general term "state of the art of technology", which is a general guideline in many legal regulations of technology, including cybersecurity. It means, for example, that private companies and public institutions implementing cybersecurity have to fulfill the state of the art of technology to be compliant with the legal rules. In my opinion as a lawyer, this is very fitting, because a law will never be able to conclusively map, in a casuistic sense, all the technologies that will be developed and needed in the future, especially for cybersecurity, due to the rapid technological development and the very fast development cycles we have, not only currently but also in the future. This is also a widely accepted opinion in the scientific community. The second use case scenario I would like to mention, when cyber attackers use AI to compromise IT systems, is also not a specific AI cybersecurity scenario, because, as with defending against cyber attacks, attackers may well use various technologies to successfully attack IT systems. These are typically criminal offenses, and in various countries all over the world we also have cyber criminal law. The criminal offenses in national cyber criminal legislation are being interpreted, and as part of this legal interpretation they already cover the use of AI as a technical means of attack, without the need for explicit regulation. And now we come to the third point of this very short statement. The third aspect is not directly related to cybersecurity, but to the development of AI.
We already heard some statements about the development of AI and about keeping AI secure while it is being developed. Of course, this is an important question that we also have to address from a legal perspective. But this development issue of AI cannot be considered a cybersecurity-specific issue; it requires its own focus. It must be ensured, for example, as Amal mentioned, that AI systems are not themselves compromised at this very important stage, and that is something we have talked about in several panels during this conference. This is also what the European AI Act, a regulation that has already been mentioned several times, seeks to achieve when its draft version, made public last year, explicitly stipulates that AI itself must be cyber secure. Therefore, developers of AI must provide safeguards to prevent, for example, manipulation of training data sets, or hostile input designed to manipulate an AI's response — something I believe Amal also mentioned. But this, in my opinion, is just one facet of secure and safe AI development, and not really a use case of implementing AI in cybersecurity. So, to come to a conclusion: in my opinion, the regulation of AI and cybersecurity must clearly differentiate between scenarios in which AI is only one of several possible technical means, and the regulation of AI-specific risks themselves. I think this is an important point that has to be taken into the policy debate and into the future legal debate as well. Thank you.

Moderator:
Thank you very much, Dennis. So far, beginning with the ethical principles put forward by Noushin — transparency, safety, human control, privacy, defensive cybersecurity, and being open for dialogue — what we have heard has pretty much been agreed upon in various ways. We heard from Amal about the framework, the five steps of defensive cybersecurity — the lifecycle of identifying, protecting, detecting, responding, and recovering — which dovetails with various aspects of the ethical principles Noushin put forward on safety, human control, privacy, and defensive cybersecurity. We also heard from Nastia about transparency, as well as the multi-stakeholder cooperation perspective, among other things. And Dennis highlighted some of the limitations of regulation and the need for ethical principles as an overlay. We'll talk a little more about all of this in a short while, but I want to take this time to open the floor to possible questions; otherwise I am going to ask a round of questions myself. Johan, do you have any questions from the online participants? Otherwise, I am quite ready to launch into my round of questions. Yes, there is a question in the room. Can I just ask you to take the mic? You have to turn on the mic — push the button up. Thank you.

Audience:
All right. Thank you for the presentations as well. My question is this: the ethical question is, for sure, a more philosophical approach, but when I look at cybersecurity, the adversary is going to use adversarial AI, and they don't care about ethics. For us to defend, detection might be where we can apply ethical approaches; but when we are talking about response, especially active cyber defense and engaging in responsive actions, applying ethical AI to counter an unethical adversarial AI might actually put us at a disadvantage. I would like to hear your thoughts on this as well.

Moderator:
All right, maybe I’ll ask Nastia to take the question. Thank you for that question, first of all.

Anastasiya Kozakova:
That's a good question, and I think it already existed before AI. If an organization is being attacked, does it have the right, and the possibility, to hack back? There have been long discussions about hackbacks: whether they are illegal, whether they are lawful, whether they could be legitimate in a particular situation. I think in most countries, governments and industry came to the conclusion that organizations probably shouldn't have this right. Instead, law enforcement, which has a mandate under the law, should step in: if the organization asks for help, law enforcement or other specialized agencies can investigate and then decide what to do, depending on what type of actor the organization is dealing with. If it is cyber espionage by an APT, it is of course a matter of international security and of the relations between two or more countries, and it gets more critical; but if it is a really advanced, complicated DDoS, or something similar involving AI, does the organization have this right? I think it would be really risky to go in that direction. Overall, as you said, how we define ethics here is really philosophical: why do we, as good actors, need to be ethical when a lot of bad actors behave unethically? Again, I think this is a risky conversation to have, because we need to define our goal. Our goal is to enhance security for all, a sort of optimal collective security, and to enhance stability. If we, as good actors, behave unethically even to protect ourselves, is that part of security and stability in the end? I think not.
So we still need to abide by international law, domestic law, national law, and the overall rules, to make sure that when bad actors act badly, we stay on the side where we understand the limits of our actions. But I don't want to conclude on a pessimistic note; rather a hopeful one. The challenges we see in cyberspace are of course getting more and more sophisticated, and they are not purely technical — and this is what makes them really difficult. If they were purely technical, technical people would solve them. The problem is that much more nuanced solutions are needed, sometimes policy solutions, sometimes international security solutions. In this regard, I think we humans who try to protect ourselves need to be even more creative. Yes, that is difficult, but we have to be: creative in building on what we have had for centuries — international law, and again national law — but also creative in how new types of responses could be developed, in how we could enhance cooperation between communities, between vendors who could share knowledge and research outputs, and even between governments, despite the current geopolitical situation. How can we increase our chances of developing those creative solutions to address threats that are getting ever more complicated? Again, that's difficult, but I think there is a lot of hope that this will develop more and more, because in the end, I think, we all want security for us all.

Moderator:
Thanks for that. I thought I'd also pay some attention to the questions from our online participants. There was a question from Yasuki Saito, and I'd like to ask Noushin to take it. It says: what do you think of using an LLM, or ChatGPT, to deceive human users and cause their PCs to be infected by malware? Is there any good way to avoid such things? Noushin?

Noushin Shabab:
Okay. I guess we heard from Amal about this particular type of attack — advanced social engineering enabled by AI — and this is a perfect example of using an AI system to craft a more convincing social engineering conversation, email or message that looks very benign and doesn't raise any suspicion. This is just one example of how AI can be misused by malicious actors. But I would say that an advanced and sophisticated security solution, with machine learning techniques implemented in it, can still help to identify a spear-phishing email or even a social engineering attack. Apart from having an advanced solution to protect users against such attacks, I would say that talking about them and raising awareness is essential, because attackers, especially with the use of AI, can bypass a lot of defenses: it becomes much easier for them to understand the target environment — what software and what security measures are in place — and to figure out a way to bypass them. So something to complement an advanced solution would simply be education, for ordinary users and for employees in organizations, to understand the risks, to understand how AI can help make a more convincing conversation or a more convincing spear-phishing email, and to make sure users are aware and don't fall victim.
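As a toy illustration of the kind of machine-learning filtering mentioned here, the sketch below trains a minimal naive Bayes text classifier on a handful of made-up messages. Real spear-phishing detection uses far richer features (headers, sender reputation, link analysis), so treat this purely as an illustration of the principle, with all labels and training texts invented for the example.

```python
import math
from collections import Counter

class TinyPhishFilter:
    """Toy multinomial naive Bayes with Laplace smoothing.
    Assumes at least one training example per class."""

    def __init__(self):
        self.words = {"phish": Counter(), "legit": Counter()}
        self.docs = {"phish": 0, "legit": 0}

    def train(self, text: str, label: str) -> None:
        self.docs[label] += 1
        self.words[label].update(text.lower().split())

    def classify(self, text: str) -> str:
        vocab = set(self.words["phish"]) | set(self.words["legit"])
        total_docs = sum(self.docs.values())
        scores = {}
        for label in self.words:
            # log prior + smoothed log likelihood of each token
            score = math.log(self.docs[label] / total_docs)
            n = sum(self.words[label].values())
            for w in text.lower().split():
                score += math.log((self.words[label][w] + 1) / (n + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
```

Even this crude model separates "urgent password reset" language from routine office mail once trained on a few examples of each, which is the basic mechanism behind statistical spam and phishing filters.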

Moderator:
Thanks for that. Taking stock of what we have so far, from the poll, the survey results and the discussion: first of all, AI in cybersecurity has obviously produced a lot of benefits, and we can't run away from its use. Second, of course, it comes with costs: there are impacts, there are unintended consequences. Just now Amal brought up some statistics from Kaspersky from several years ago about the number of new malicious files detected daily, and thanks for bringing that up — I can give an update. As of today, Kaspersky uncovers more than 400,000 new unique malicious files every day, and that's astounding. When I talk about new unique malicious files: one malware sample that infects, let's say, 10,000 computers does not count as 10,000 — it is counted as one, if it is the same malware. So while all of us sit in this room for an hour and a half for this workshop, we are essentially talking about 27,000 to 30,000 new unique malicious files uncovered by a single company like Kaspersky. That's astounding. So there are costs and there are benefits to the use of AI in cybersecurity that we need to be concerned about. That brings me to the third point, which is the reason for our discourse today: we discussed the role of laws and regulations, and then we start thinking not just about regulation, but about what exactly we are regulating and why. And we also hear discussions about conflicting regulations, which are beginning to surface globally.
And that brings us to the limitations of regulations. There are limitations to regulations, and as a lawyer I can say that anything that is legal may also not be ethical. So do we then take a step forward and start thinking about ethical principles beyond just legal frameworks? That is, I think, where we are today. And I think we have a question from the floor. Sir, can I ask you to take the mic, introduce yourself and give us your question? Thank you.

Martin Boteman:
Hi there. This is Martin Boteman. I've been talking with the DC IoT today as well. One of the complexities that comes up when you talk about AI and cybersecurity — and I agree with what has been said — is that security will require identity. And I can see that AI specifically has a dual impact here. One is that, thanks to AI, data becomes personally identifiable even more often than before. The other is that AI can also help secure, as has been pointed out, maybe also with the identity factor. So how do you deal with the dichotomy between the need for identity going forward — there's no way around it — and, at the same time, privacy? This is part of your legal considerations, of course, and the ethical ones as well. Thank you.

Moderator:
OK. All right. I will leave Dennis to take this question.

Dennis Kenji Kipker:
Yeah, of course. When developing AI, we have high-impact privacy risks; I think that is quite clear. Speaking from the European Union perspective, the General Data Protection Regulation applies to personal data also when AI is being trained. But as I mentioned regarding the possibilities and problems of AI regulation, I think in general we need to move away from trying to regulate every conceivable scenario and risk. We definitely have risks, but this is not specific to AI; it concerns the whole technology sector. On the one hand, full technology regulation will, in my opinion, never be possible. On the other hand, administrative practice raises the question of who should control and implement all these laws, because you will need a lot of resources. We see it with data privacy authorities, I think not only in the European Union but all over the world: they are struggling with the implementation of laws, and there are always companies that are not compliant. This is, of course, not an AI-specific question. Legally, it has long been proven that what matters is not the severity of sanctions after a violation, but the likelihood that a violation is detected, and I think this is where we need to work. What this means for AI, in the wake of the current hype we have seen since the beginning of this year, is that we should not fall into a mindless regulatory debate that possibly ends up delaying, or even torpedoing, the really necessary regulation. We definitely need core regulation, but we have to distinguish between things that are necessary for the first step and things that are not. In my opinion, the European AI Act, in its draft version with its different risk classes, is a good first approach for the time being.
Of course, it needs to be revised again, because we have seen some new risks emerge this year. And since AI is mainly developed not by states but currently by just a few big tech players, mostly from the US, the cooperation between state and industry actors really needs to improve; this is where we need to work as well. Self-regulation by industry alone is, in my opinion, not enough. We need a system of transparency, and more cooperation established on a permanent legal basis. And when we talk about ethical principles — this is also part of the session — I think ethical principles can help, of course, but the authorities supervising AI must be stronger. They need more financial resources and more personnel in the future so that we can tackle all these problems.

Moderator:
Thank you. I think I’ll ask Professor Amal to also add on, and then Nastia can do so as well. Thank you.

Amal El Fallah Seghrouchini:
Thank you for the question. I will try to answer the question about identity and security. When we talk about security, we are naturally interested in the identity of the person we are trying to secure, for example. But there are initiatives around the world whose purpose is to distinguish between the identifier and the identity of a person. This is very interesting, because you can rely on a trusted third party to certify that a given person is associated with a given identifier, so we do not have access to the person’s full identity. Another very nice initiative is to avoid having a single unique identifier per person, which prevents a 360-degree view of the person. Instead, there are sectoral identifiers, each associated with the same identity through a trusted third party. You add all these layers to avoid direct access to a person together with all of their data, because anonymization of data alone is not enough today.
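The layered approach described here, one identity certified by a trusted third party but exposed only through per-sector identifiers, can be sketched in a few lines. This is an illustrative sketch only: the sector names, keys, and `sectoral_identifier` function are invented for this example, and a real scheme would involve a certified third party and much more careful key management.

```python
import hmac
import hashlib

# Hypothetical sketch: a trusted third party holds a distinct secret key per
# sector, so each sector sees a different pseudonym for the same person, and
# pseudonyms cannot be linked across sectors without the third party.
SECTOR_KEYS = {
    "health": b"secret-key-health",  # in practice: securely generated and stored
    "tax": b"secret-key-tax",
}

def sectoral_identifier(person_id: str, sector: str) -> str:
    """Derive a sector-specific pseudonym from a person's base identity."""
    key = SECTOR_KEYS[sector]
    return hmac.new(key, person_id.encode(), hashlib.sha256).hexdigest()

alice_health = sectoral_identifier("alice", "health")
alice_tax = sectoral_identifier("alice", "tax")

assert alice_health != alice_tax  # no single 360-degree identifier
assert alice_health == sectoral_identifier("alice", "health")  # stable per sector
```

The point of the design is that relying parties in one sector only ever see their own pseudonym, while the third party alone can map a pseudonym back to the certified identity.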

Anastasiya Kozakova:
I don’t know if this already answers your question, but I’m also curious to know what you think as well. The questions you ask are very specific, but they’re really critical, of course. And I would probably offer a not-so-popular opinion: I believe that regulations are not the only solution. Quite often regulations can be really slow and not that effective in addressing the challenges that we face, especially with AI; we still don’t know how AI will impact us even a week from now, it is developing that rapidly. Regulations are important to nudge developers, manufacturers and tech companies in the right direction and to put the right incentives on the market, but I still believe that the industry has the capacity and the ability to do a lot of really important things without policymakers and regulators being in the room. For software, there are a lot of initiatives around the software bill of materials, or SBOM. The idea is to increase transparency about the composition of the software that you’re using. If you take a cake, you need to know its ingredients to make sure it will not do you any harm given your dietary specifics. The same logic applies to software. Even if you’re a big company, you need detailed, updated, automated documentation, ideally machine-readable, to understand what components are in there and how they are used, so that if there is a vulnerability, you can more easily find the component that could be exploited. I think the same logic could be applied to those who develop AI-based solutions: increase transparency about the components that you use, and also improve data documentation by documenting the data sources, collection methods and processing techniques that you apply.
Yes, it will probably be useful only to the most advanced customers and the large corporations, but these companies also have their own users, and I think that will have an indirect positive security effect for us all. It takes time, but I think it is a more agile path than waiting for extensive regulation to be passed.
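The SBOM idea described above, a machine-readable "ingredient list" that can be checked against known vulnerabilities, can be sketched as follows. The structure is loosely inspired by real SBOM formats such as CycloneDX, but it is simplified here, and the component names and the advisory are invented for illustration.

```python
# Minimal machine-readable "ingredient list" for a piece of software,
# loosely inspired by SBOM formats such as CycloneDX (structure simplified;
# component names and versions are invented for illustration).
sbom = {
    "components": [
        {"name": "libfoo", "version": "1.2.3"},
        {"name": "libbar", "version": "0.9.1"},
    ]
}

# A hypothetical published advisory: libbar 0.9.1 is known to be exploitable.
known_vulnerable = {("libbar", "0.9.1")}

def affected_components(sbom: dict) -> list:
    """Return names of components that match a known-vulnerable entry."""
    return [
        c["name"]
        for c in sbom["components"]
        if (c["name"], c["version"]) in known_vulnerable
    ]

print(affected_components(sbom))  # prints ['libbar']
```

Because the document is machine-readable, a check like this can run automatically on every build, which is exactly the transparency benefit described above.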

Moderator:
Thank you for that, Nastia. And I think your point about the software bill of materials is really something that resonates with me because that’s also something that at Kaspersky we practice for our software. I think it is important to know the ingredients to the cake that you’re about to eat. I think Professor Amal wants to add on something and perhaps you wanna also give a response later on.

Amal El Fallah Seghrouchini:
Yeah. Because we have been talking about ethics from the beginning without specifying what we mean by it. Ethics is not limited to data protection; we also have to consider dignity and the protection of human rights. For example, when you detect a malicious attack, you should be careful about establishing its origin. There is also fairness, privacy, and informed consent. And my point is: what do we mean by informed consent? When people give data and information and interact with a system, for example a generative AI system, they are not aware of the consequences of the tool they use. They give consent and think they are informed, but in fact they are not, because most people are very far from technology and most have no idea about cyber systems. So what do we mean by informed consent? How do we protect dignity in these situations?

Martin Boteman:
Thank you for that. It’s not emotion, my throat is just dry. What we ended up with in this morning’s discussion was very much that legal measures alone, of course, aren’t enough; legal is the last resort, in a way. So, whereas we’ve been talking a lot about privacy and security by design, I think it’s important to realize that in an AI context that is an extra challenge. But it’s a challenge we’re also facing elsewhere: thank you for the reference to the European Union’s AI Act, and we’re also aware that the Algorithmic Accountability Act is coming up in the US. These are ways we may end up with AI not being just magic, but something real and concrete that we can take responsibility for. I think that’s an important element. So thank you for your answers. We don’t know all the answers yet; I very much realize that. But the old principles of security by design and privacy by design remain important. We realize we live in a world where in some countries identity is there to protect you, while in others it may make you a victim. So thank you very much for your thoughts.

Moderator:
All right, thank you for that. I’m mindful of the time: we have about 11 minutes left, and I’m trying to economize what remains, not forgetting that we still have one more survey for our online participants before we conclude today’s discussion. So I just want to go down the row, maybe beginning with Noushin. It’s the same question for all our speakers, so please try to keep your remarks short, one or two minutes maximum. Which are the two most important ethical principles, in your view, that definitely need to be followed in cybersecurity?

Noushin Shabab:
Yeah, that’s actually a very good question. The two most important principles for me, I think, are the two main points that have been discussed more than any others today. The first one is transparency: being transparent to the users, and also to the rest of the community and the world, about what we do with user data, how we implement detections, and how we protect users, be it through a machine learning technique, an algorithm, or more traditional methods. And the second one is obviously privacy. We are in the cybersecurity industry and we deal with targets and victims of cyberattacks, so protecting users is one of the most important aspects for us. And obviously, if we don’t take care of the privacy of user data ourselves, it doesn’t make much sense to try to protect users from cyberattackers, right?

Moderator:
So I would say transparency and privacy for me. Thank you very much, Noushin. I’ll just go down the row to Dennis. I’m secretly hoping you will touch on some other principles,

Dennis Kenji Kipker:
but there’s of course the democratic… Yeah, that’s really a difficult question. To make a long story short, as a scientist I can say that even with paradigmatic developments like AI, we should move to the level of factual argumentation; this is something I mentioned several times in my opening statement. We do not eliminate problems by regulation alone; in my opinion that is an illusion, even if legislators and politicians might see it differently. In cybersecurity, we need to align ourselves clearly with the three AI scenarios that I also mentioned in my opening statement. In terms of the principles, I find it very difficult to name just two, because the use of AI, not only in cybersecurity but everywhere, has so many facets and different risks that we have not yet approached. But I think one of the most important things is that we keep human control over decisions. This is clearly described with regard to the use of personal data, for example, and with regard to official decisions of authorities or any decisions of private companies that might have a negative impact on individuals: such decisions cannot be made based on AI alone. The second important principle, I would say, is that safety comes first. We have to distinguish between security and safety; that cannot be done here in a few minutes, but AI has so many use cases that security is connected very strongly with safety. We should take a close look at all these safety issues, because if AI is not developed securely, we cannot have safe solutions as a result. So in my opinion, these would be the two most important principles, on top of the ones Noushin mentioned. Thank you.

Moderator:
That’s great. Amal, would you like to give us your two most important principles in your view?

Amal El Fallah Seghrouchini:
If we are talking about principles of ethics, I think… yes, okay. Because we talk about ethics as if it were a stamp we put on a product, and ethics is not that. Ethics is a continuous debate and discussion about how things will go ahead. So in my opinion, the first thing we have to take care of is how to preserve dignity and human rights in all these systems. And the second is to work towards genuinely informed consent with the populations that use these systems, which means we have to be very didactic in explaining things. For example, we have talked about accountability. Accountability, privacy, data protection: all of these are tools towards the principles of ethics.

Anastasiya Kozakova:
So I guess I’m also expected to answer this question. I think none of these principles alone helps us, as users or simply as people who live in cyberspace, to have a sufficient degree of security. Transparency alone tells us about a particular technology, what type of code is used and so on, and we have the policies, but how does that by itself help us be more secure and feel more stable in cyberspace? So no single principle achieves what we want to achieve; but all of them together, and many more, can increase our chances of reaching this optimal security. Overall, I think we as humans need to be guided by the principle that we should avoid doing harm to others with any type of technology, and AI is of course no exception here.

Moderator:
Thank you, thank you for that. I think my secret wish sort of came true and everyone touched on different principles, but now it leaves us to hear from our online audience as well. I’m going to invite my colleague Jochen to pull up the final survey question, because I think it’ll be interesting to hear from the online audience. The question is: please mark from one to six the significance of each ethical principle of AI use in cybersecurity, six being the most significant and one being, to you, the least significant of the six. But I do agree with Nastia that everything comes together; it depends on how you formulate the principles, as Amal is whispering in my ear. Let us wait a few more seconds for the poll. In the meantime, I thought I’d say a word about what we are going to do with the ethical principles that are currently at the draft-proposal stage. Today we have heard ideas that were discussed, some new suggestions were made, and the proposals will be developed further; it doesn’t just stop here. The goal is to develop a basis that can serve as a guideline for industry, research, academia, politics and civil society in developing individual ethical principles. After this session, we will publish an impulse paper on ethical principles for the use of AI in cybersecurity, which will reflect the results of this discussion and will be made available to the IGF community as well. In addition, the paper will be sent to our stakeholders to gather complementary feedback, and Kaspersky will also further develop our own principles based on this paper and provide best practices for the cybersecurity industry that we’re in.
So now, thank you for putting up the results from the poll, Jochen. The question was: please mark from 1 to 6 the significance of each ethical principle of AI use in cybersecurity. Jochen, would you like to interpret these results for us? Because there are many colors.

Jochen Michels:
Yes, there are many colors, and this reflects that all six principles are important; there is no clear priority. All of them were marked by the different attendees, and it makes clear that, as you said, Noushin, and also Dennis, Amal and Nastia, it is very important to take all the different principles into account and to start further multi-stakeholder discussion on that. So that’s the result of the poll.

Moderator:
Thank you very much for that. I think we can close the poll, and I will just take one minute to wrap up. The key takeaways really are that the ethical principles all come together and complement one another, and that they need to be developed further beyond today’s discussion. And of course, as Amal said, it really depends on how you frame them, and that is something we need to develop further. So when it comes to transparency, safety, human control, privacy, the use of AI for defensive purposes in cybersecurity, and being open for dialogue, these are, I think, equally important principles, as even our online audience has agreed. It remains for me to state the call for action: we need further international multi-stakeholder discussion on the ethical principles we have developed and designed. They are not exactly rocket science, but it is about collating all of them into one document that is coherent and makes sense for everyone. And because we are a player in the cybersecurity field, we are of course particularly interested in developing such ethical principles for AI in cybersecurity. I want to take this time to thank all of our audience today, and the people who asked questions; I hope it has furthered this discourse. Thanks also to all of our speakers: Nastia, Amal, Dennis and, of course, Noushin. I’m Jeannie from Kaspersky signing off here. Thank you very much, and I hope you have a successful rest of your time at the IGF. Thank you.

Amal El Fallah Seghrouchini — speech speed: 128 words per minute; speech length: 1741 words; speech time: 817 secs

Anastasiya Kozakova — speech speed: 170 words per minute; speech length: 3082 words; speech time: 1087 secs

Audience — speech speed: 167 words per minute; speech length: 130 words; speech time: 47 secs

Dennis Kenji Kipker — speech speed: 157 words per minute; speech length: 1961 words; speech time: 747 secs

Jochen Michels — speech speed: 123 words per minute; speech length: 82 words; speech time: 40 secs

Martin Boteman — speech speed: 158 words per minute; speech length: 389 words; speech time: 148 secs

Moderator — speech speed: 162 words per minute; speech length: 3553 words; speech time: 1313 secs

Noushin Shabab — speech speed: 125 words per minute; speech length: 1371 words; speech time: 658 secs

DCNN (Un)Fair Share and Zero Rating: Who Pays for the Internet? | IGF 2023


Full session report

Audience

The discussion surrounding Europe’s influence on Latin America’s policy decisions is of great interest. While the sentiment towards this topic remains neutral, it is acknowledged that everything discussed in Europe has a significant impact on Latin America’s policy agenda. This highlights the interconnectedness between the two regions in terms of policy-making.

The development of the interconnection ecosystem has been a notable achievement for the internet technical community. Previously, all the interconnections between ISPs and content providers used to happen in Miami. However, a significant effort has been made to develop a completely new interconnection ecosystem. This development has been positively received and is seen as a step forward in enhancing access for people and supporting industry, innovation, and infrastructure, in line with SDG9.

On the other hand, the adoption of new policies by countries like Brazil can have negative consequences. When a country adopts a particular policy, companies are required to pay and comply with the law, which may result in additional costs. As a result, companies may choose not to bring their caches and peerings into exchange points. This policy change can disrupt the existing system and have an adverse effect on telecommunications companies and content providers. The smaller stakeholders, such as small ISPs, small platforms, and small internet companies, will be particularly affected by such changes. The disruption caused by this policy change is expected to be significant, with results similar to the current scenario.

The European telecom sector is facing several challenges, with a major concern being the cost involved. The sector has experienced a decrease in revenues by 30% since 2011. Furthermore, the returns on investment for the capital employees have been lower than those in the US. This negative trend highlights the need for attention and potential solutions to address the financial health of the sector.

Investment in networks is considered of utmost importance. The focus remains on the quality of networks, along with the need to improve coverage, especially regarding 5G networks. The current adoption rate of 5G in Europe stands at 15%, underscoring the room for growth and the importance of investing in network infrastructure. These investments align with the goals of SDG9, which include industry, innovation, and infrastructure.

Another suggestion put forth during the discussion is the idea of redistributing funds from over-the-top (OTT) platforms to support telecommunications services, particularly in rural areas. This proposal aims to utilize the funds obtained from OTT platforms as a source for a Universal Service Fund, which can be dedicated to strengthening telecommunications services in areas with limited connectivity. This concept resonates with the focus of the SDGs on reducing inequalities (SDG10) and industry, innovation, and infrastructure (SDG9).

In conclusion, the discussion on Europe’s influence on Latin America’s policy decisions provides valuable insights into various aspects of policy-making, interconnection ecosystems, the impact on small stakeholders, challenges faced by the European telecom sector, the importance of investment in networks, and the potential of redistributing funds for rural telecommunications services. While some of these points have positive implications, others highlight concerns and challenges, making it a diverse and multifaceted discussion.

Maarit Palovirta

The telecommunications market in Europe faces limitations in investment for infrastructure due to its unique market structure and intense competition. Compared to the United States and Japan, Europe has a more fragmented market with 38 telecom operators serving over 500,000 customers, creating challenges in securing investment for vital infrastructure like 5G networks.

Additionally, heavy sector-specific regulations and restrictions on mergers hinder the growth of European telecom operators. Pricing regulation further limits their flexibility in pricing services. Limited investment in telecommunication infrastructure impacts service quality, trust, and sustainability, leading to decreased customer satisfaction. Efforts are being made to measure the environmental sustainability of the sector.

Despite these challenges, the European Commission deems the existing open internet principles valid and not in need of revisiting. However, operators in Europe face a one-sided obligation to deliver any traffic regardless of size or form, limiting their ability to manage data traffic.

Investments in private networks are applauded, despite creating regulatory asymmetry. The impact of these investments needs evaluation in relation to access network investment. Addressing the lack of coverage and capacity in some areas requires investment and enhancement.

The European Commission aims to deliver a new regulatory framework to tackle industry challenges and supports open discussions with stakeholders. They advocate for a check-up of the internet ecosystem and regulation framework. In conclusion, the telecommunications market in Europe faces limitations in infrastructure investment due to its unique market structure and competition. Sector-specific regulations and pricing restrictions further hinder operator growth. Limited investment affects service quality, trust, and sustainability. However, the existing open internet principles are deemed valid. Investments in private networks are praised, and efforts are being made to address coverage and capacity issues. The Commission aims to deliver a new regulatory framework and supports open discussions to address challenges in the industry. A comprehensive evaluation of the internet ecosystem and regulation framework is advocated.

Kamila Kloc

The issue of concern over internet fragmentation due to the practices of telecom companies and big tech companies is gaining significant attention. These practices have the potential to create a division between users and services, ultimately leading to increased inequality. The original intention of the internet was to be an open and interconnected environment, but certain practices have disrupted this ideal.

Limited internet access poses a significant drawback, especially for economically disadvantaged individuals. In Brazil, for instance, many people rely on public Wi-Fi or have limited access to home Wi-Fi. Towards the end of the month, when data allocations are nearing their limit, accessing the internet becomes challenging. As a result, individuals are left with restricted access to only a few apps or websites, exacerbating existing inequalities.

Additionally, limited internet access can contribute to the spread of misinformation. When people are unable to verify the information they receive due to restricted access, it becomes easier for unverified or false information to circulate. This situation leads to an increase in disinformation, undermining the goal of an informed and educated society.

The practices of zero rating and fair share also adversely affect consumers, particularly in economically disadvantaged regions. Zero rating is often presented as a way to provide free and unlimited access to specific apps or services. However, in practice, it can restrict individuals’ choices and tie them to specific apps. Furthermore, fair share practices, aimed at increasing revenue for telecom companies, may result in increased prices and reduced service quality. These practices further disadvantage consumers, especially those in economically vulnerable communities.

When discussing open internet access and methods to expand access, it is crucial to prioritize the well-being of consumers. The focus should be on finding solutions that ensure equal access to the internet for all individuals, irrespective of their socioeconomic status. Addressing the distortion of the telecom market, whether through existing or potential practices, is essential to prevent further inequality.

To summarize, the concern over internet fragmentation and limited access resulting from the practices of telecom companies and big tech companies is of growing importance. These practices can lead to a digital divide and increased inequality among users and services. Limited internet access exacerbates this inequality and hampers individuals’ ability to verify information, facilitating the spread of misinformation. The practices of zero rating and fair share also harm consumers, particularly in economically disadvantaged areas. It is crucial to prioritize consumers’ welfare when discussing open internet access and explore equitable approaches to expand access for all.

Artur Coimbra

The internet architecture has significantly changed over the past 15-20 years, with content now being located closer to users. This transformation has led to the emergence of micro data centers, content delivery networks, and caching infrastructures, revolutionizing the way content is delivered. Additionally, there has been a remarkable reduction in data storage costs, with prices decreasing by as much as 98% or 99% during this period. These changes have not only made the service more affordable and efficient but have also resulted in cost savings for IP transit contracts.

While these developments have brought benefits to users and content providers, telcos are facing pressure from large digital platforms to provide content for free. Previously, telcos charged both content providers and users through IP transit contracts. However, due to pressure from big tech platforms, telcos are now compelled to provide content without charge, leading to a shift towards a one-sided market. This transition has placed telcos in a challenging position as they are unable to increase charges for users due to legal restrictions on data caps and other market factors.

A market solution is seen as a positive approach to address the pressure telcos face from big tech platforms. Creating a healthy and sustainable network is an incentive for both telcos and big digital platforms, emphasizing the need for a market-driven solution.

It is important to differentiate whether the pressure telcos experience is a result of bargain power or market power exerted by big tech platforms. If the pressure is due to bargain power, it is considered a norm within the business environment. However, if it is a consequence of market power, then it becomes a structural issue that necessitates intervention from regulators and legislators. This distinction is crucial in determining the appropriate course of action.

In Brazil, regulators are adopting an evidence-based approach to define the problem before seeking a solution. Gathering evidence and understanding the issue better is seen as essential for achieving the objective of increasing funds available for network investment.

When designing the concept of a fair share, careful consideration must be given to ensure sufficient funds are allocated to network investment. If the fair share results in pricing competition among users, the available funds for investment could be depleted. Therefore, striking a balance between fair treatment and maintaining adequate investment funds is vital.

In conclusion, the evolution of internet architecture has brought about positive changes, including cost reduction and improved services. However, telcos now face challenges due to pressure from big tech platforms. Finding a market solution and distinguishing between bargain power and market power will be crucial for maintaining healthy networks. Regulatory intervention may be necessary in cases involving market power. The regulator in Brazil is adopting an evidence-based approach to addressing the issue at hand. Designing a fair share concept that enables investment without depletion of funds is of utmost importance.

KS Park

The sending-party-pays settlement rule implemented among internet service providers (ISPs) in South Korea has had several negative consequences. This rule has inflated internet access fees, putting a financial strain on both ISPs and content providers. Content providers have been required to pay more as their host ISPs send more data to other ISPs. Consequently, South Korea’s IP transit fees have become significantly higher than those in Frankfurt and London, reaching ten times and eight times the respective fees in those cities in 2021. As a result, public interest apps, such as the COVID location announcement system, have been unable to fully function due to the exorbitant internet transit fees.

Furthermore, the presence of paid peering has caused confusion and violated network neutrality. A significant portion of internet traffic goes through paid peering points, which has led to concerns about unfair share violation and the lack of network neutrality. The confusion surrounding whether network share and unfair share violation are a result of paid peering persists.

Regulations from both the Federal Communications Commission (FCC) and the Body of European Regulators for Electronic Communications (BEREC) do not explicitly condemn paid peering, leaving room for uncertainty and complications in enforcing network neutrality.

The concept of mandatory paid peering is also met with negative sentiment. Implementing mandatory paid peering would likely lead major companies such as Google and Netflix to disconnect from the network rather than pay access fees. If content providers burdened with peering fees disconnect, regulators would have limited options without fundamentally altering the nature of the internet.

Despite these issues, the principles of freedom to connect and not charging for data delivery remain positive aspects of the internet. These principles are considered the foundation of the global product of the internet and enable users to connect freely without being burdened by data delivery charges.

On a positive note, despite a five-fold increase in data traffic, the cost of network maintenance and development has remained constant over the past five years due to technological advancements. This demonstrates the efficiency and progress made in maintaining networks and supporting the growing demand for data.

Turning to the topic of 5G, Korean telecoms have faced challenges in delivering good connectivity despite forcing consumers to purchase 5G phones. This has resulted in consumer dissatisfaction and the filing of class-action lawsuits against telecoms. Additionally, the government in Korea has taken away the 5G bandwidth license from certain telecoms, further complicating the situation.

European telcos, on the other hand, have managed to maintain profits despite falling revenues, thanks to the decreasing cost per unit of data. They have been able to offset the declining revenues by reducing the cost of data delivery.

However, it’s important to note that declining profits of telcos do not guarantee the maintenance of privacy. The Korean case, for example, indicates that despite falling costs and sustained profits, privacy has not been adequately protected.

In conclusion, the sender-pays rule among ISPs in South Korea has had negative effects on both ISPs and content providers, causing financial strain. The presence of paid peering has raised concerns about unfair cost-sharing and violations of network neutrality. Despite these challenges, the principles of freedom to connect and of not charging for data delivery are key pillars of the internet. Technological advancements have enabled the cost of network maintenance and development to remain constant despite increased data traffic. Challenges with 5G connectivity and lawsuits have arisen for Korean telecoms, while European telcos have maintained profits through reduced data delivery costs. However, the declining profits of telcos do not guarantee the protection of privacy.

Konstantinos Komaitis

Applying old telecoms rules to the internet is widely regarded as detrimental, as it would create unanticipated barriers to entry. This approach is seen as nonsensical in the context of internet governance. The argument against these rules is based on the belief that they would hinder competition and impede innovation in unpredictable ways.

It is also argued that the internet infrastructure is not solely dependent on telecom operators. A diverse range of actors, including technology companies, contribute significantly to the development and maintenance of the internet ecosystem. Content and application providers play a vital role in supporting internet infrastructure, as exemplified by their contributions through CDNs, data centers, and cloud services. Therefore, portraying only telecom operators as the sole contributors to internet infrastructure is inaccurate.

The issue at hand also revolves around network neutrality, and concerns have been raised regarding potential discrimination against certain applications and content providers. These cases highlight violations of network neutrality principles, not only from a technological standpoint but also in terms of economic fairness.

The debate around Universal Service Funds (USFs) has garnered criticism from various perspectives. Telefonica, for instance, suggests that Europe should not replicate the USA’s approach to USFs and instead advocates for direct payments as a more suitable solution. Additionally, Komaitis questions whether telecom companies genuinely desire a discussion centered on USFs, suggesting a misalignment of interests.

Criticism is also directed towards Europe’s telecom model, which is deemed as setting a poor example. Komaitis specifically points out flaws in Europe’s approach and highlights the need for a more effective model.

Notably, over 20 organizations globally, from Brazil, India, Europe, and the United States, have expressed similar concerns about the infrastructure issue, indicating a widespread and significant global concern. This highlights the need for a global dialogue and deliberation on infrastructure, led by civil society organizations.

Komaitis stands firmly against the current method of discussing infrastructure and believes that it needs fundamental changes. He argues that the current conversation around infrastructure is primarily driven by telecom operators, neglecting the perspectives and interests of other stakeholders.

In conclusion, applying outdated telecoms rules to the internet is widely seen as detrimental and likely to create unforeseen barriers. The internet ecosystem relies on diverse actors, including technology companies, and portraying only telecom operators as contributors to infrastructure is misleading. The issue at hand encompasses concerns over network neutrality, technological and economic discrimination, universal service funds, Europe’s telecom model, and the need for a more inclusive and global discussion on infrastructure. Komaitis takes a stance against the current infrastructure dialogue and calls for a change in approach.

Thomas Lohninger

In the discussion surrounding the telecom industry, several key points emerge. Firstly, the practice of zero-rating, which allows users to access certain online content without incurring data charges, is prevalent in many nations. This practice controls how users experience the internet by incentivising them to use certain services for free.

Concerns have also been raised about the shift in the telecom industry towards prioritising profit over quality. Some argue that this focus on profit optimisation may lead to a deterioration in the overall user experience. Critics suggest that this approach could result in the elimination of local caching services, potentially increasing costs for consumers.

The concept of net neutrality is also a contentious issue. It is argued that network fees are inherently incompatible with the principles of net neutrality. Those who support net neutrality argue that all users should have equal access to the internet, without any discrimination or preferential treatment based on payment.

Opponents of a proposition that violates net neutrality predict that it would be harmful to society and the internet ecosystem as a whole. They argue that such a proposition would violate the principle of net neutrality and would primarily serve the profit margins of telecom companies. Instead, they suggest that the concerns and needs of society should be the deciding factor, rather than simply focusing on telecom companies’ profits.

Commissioner Thierry Breton has faced criticism for not upholding due diligence standards. His previous role as CEO of France Telecom has led to accusations that he broke his promise in the European Parliament. In response, some countries, such as Germany and the Netherlands, have issued letters to the European Commission, urging it to uphold due diligence standards.

Furthermore, when it comes to network investment, there is evidence suggesting that simply investing more money in improving the network infrastructure may not necessarily result in better quality for users. This challenges the notion that money is the main bottleneck in network rollout.

The influence of corporate interests on the decision-making process within the European Commission is also a point of concern. The appointment of the former CEO of France Telecom to the commission is seen by critics as an example of corporate capture. This has led to the promotion of potentially damaging ideas that have been rejected by stakeholders other than telecom companies.

Additionally, the creation of a major telecommunication oligopoly in Europe is viewed by some as an unfavorable outcome. Instead, it is argued that a more desirable model for the telecom industry would involve competition and cooperation among multiple players, rather than domination by a few.

There are also diverging opinions regarding the nature of telecommunications. Some argue that it should be treated as a public utility, prioritising public access and welfare. On the other hand, there are those who disapprove of market deregulation in the industry, likely due to concerns about inequality and the integrity of the market.

In conclusion, the telecom industry has sparked various debates and concerns. The practice of zero-rating, the shift towards profit optimisation, net neutrality, corporate influence, network investment, market deregulation, and the nature of telecommunications as either public utilities or market-driven entities are all key topics of contention. Clear arguments have been presented from different perspectives, each supported by specific evidence and rationales. The discussions highlight the complex challenges faced by the telecom industry and the importance of carefully considering the potential consequences of various policy decisions.

Jean-Jacques Sahel

The analysis of the speakers’ views on the internet ecosystem and its impact on consumers, innovation, and infrastructure provides valuable insights. One of the key points emphasised by the speakers is the need to enhance the open internet to drive innovation and foster digital transformation. They argue that strong emphasis should be placed on preserving the open nature of the internet, as it has been a game-changer in providing access to information for people globally. They also highlight how the internet has become an essential tool for everyday life and the economy as a whole.

Efforts to improve internet connectivity should not only focus on urban areas but also on reaching the last 5-10% of the population in hard-to-reach areas. The aim is to bridge the digital divide and ensure that everyone can benefit from the opportunities offered by the internet. In this regard, it is important to facilitate the easier deployment of internet infrastructure, making it more accessible to remote communities.

The analysis also recognises the significant contributions made by content and application providers in the internet ecosystem. These providers play a crucial role in driving innovation and creating products that attract customers. Additionally, they fund infrastructure such as subsea cables, which help transport traffic more efficiently and save costs for internet service providers (ISPs). The speakers argue that content and application providers should be acknowledged for their massive contributions and the positive impact they have on the network infrastructure.

Regulatory frameworks and market evolution were also discussed as important factors in shaping the internet landscape. The speakers suggest that improvements can be made to regulatory frameworks, both in Europe and worldwide, to accommodate new technologies and seize emerging opportunities. They highlight the need for a forward-thinking approach that embraces the positive aspects of the evolving market.

Stakeholder inclusion was another aspect that was emphasised. The speakers argue that all stakeholders, including consumer organisations, civil society organisations, industry, academics, and the technical community, should be invited to speak at internet governance events. This inclusive approach ensures a well-rounded and diverse perspective in decision-making processes.

Evidence-based decision-making was also highlighted as a crucial factor in internet governance. The speakers emphasised the importance of utilising expert analysis from organisations such as BEREC, telecom regulators, the OECD, and the German Monopolies Commission, among others. This approach promotes informed decision-making that considers the implications and potential challenges related to internet governance.

In conclusion, the analysis highlights the need to enhance the open internet, extend connectivity to remote areas, recognise the contributions of content and application providers, improve regulatory frameworks, embrace market evolution, foster stakeholder inclusion, and prioritise evidence-based decision-making. These actions will ultimately contribute to a more accessible, innovative, and inclusive internet ecosystem.

Luca Belli

The analysis examines three perspectives on zero rating and the increase in internet traffic. The first perspective asserts that zero rating is less common in the global north, but prevalent in the global south. In the global south, large platforms have been subsidised through zero rating for the past 10 years, resulting in these platforms generating most of the internet traffic. This prevalence of zero rating has created a new kind of poverty known as “data poverty,” whereby users quickly exhaust their data allowances, similar to running out of money. This perspective presents a negative sentiment towards zero rating and its impact on internet access and digital rights, thereby emphasising the need for fair share.

The second perspective critically examines operators who claim to promote fair share. It argues that these operators are responsible for implementing business models that have led to the exponential increase in internet traffic. Therefore, their assertion of fair share appears self-serving and contradictory to their own actions. This viewpoint highlights the negative consequences of these business models and expresses a critical sentiment towards operators’ claims of fair share.

The third perspective focuses on the shift in telecom operators’ perspectives on increasing internet traffic. It points out that, until the pandemic, telecom operators, especially in countries like Germany, encouraged high video consumption through schemes like BingeOn. However, it is now intriguing that these very operators consider the increase in traffic problematic. This observation indicates a change in their perception and raises questions about their motivations and inconsistencies in their approach.

Overall, the analysis emphasises the negative impact of zero rating on internet access and digital rights, highlighting disparities between the global north and south. It also critiques operators for claiming fair share while implementing business models that contribute to the surge in traffic. The shifting perspectives of telecom operators further highlight the need to scrutinise their motives and actions. These insights underscore the importance of addressing the issue of zero rating, promoting responsible consumption and production, and reducing inequalities in global internet access.

Session transcript

Luca Belli:
is a specialist at the Brazilian Institute of Consumer Protection. Then we will have Jean-Jacques Sahel, Asia-Pacific Information Policy Lead and Global Telecoms Policy Lead at Google. Then we will have K.S. Park, who is Professor at Korea University. Then we'll have Thomas Lohninger, who is Executive Director at Epicenter Works. And last, but of course not least, Konstantinos Komaitis, who is a non-resident fellow. Let me briefly explain why we are here and what is the aim of today's session. We want to discuss the emerging tensions in the fair share, or unfair share, as K.S. was reminding us, debate, and also which kind of connection exists with the previous debates that we have been discussing over the past years, especially the zero rating debate and the net neutrality debate. Over the past year and a half, especially, we have been witnessing that the proposed solution may not be so effective. So the reason why we have such a diverse panel today is precisely to try to understand the different standpoints in this debate and, ideally, to come to some common ground and maybe even policy suggestions for the future. Now, without further ado, I would like to give the floor to our first speaker, Artur Coimbra, from ANATEL, the Brazilian telecoms regulator. Artur, you have been working a lot on telecoms; I don't want to reveal your age, so I'll just say that you have a certain experience in this. So please, Artur, the floor is yours. Thank you, Luca.

Artur Coimbra:
Good morning, everyone. I'm here representing ANATEL, the Brazilian regulator. And let me just start by saying, to disappoint you, that since the so-called fair share or unfair share, I mean, network fee, is an intended solution to a supposed problem that we are still assessing, I'm not here ready to say whether it's fair or unfair. But I just want to provoke the debate with some elements. ANATEL, for example, has just disclosed tonight the mapped results of the arguments from its call for subsidies that it made three months ago. We had 627 individual contributions on this topic that were mapped and disclosed a few hours ago, and now we're going to dive into the arguments and provide an outcome of all the contributions that we received. Let me just start by saying that internet architecture has changed a lot in the last 15 or 20 years. In the golden age of the internet, we had the users, and we had the content, which was present in big data centers, expensive infrastructure, located in a few places in the world. And between them, connecting users to the content, there was just the network itself, bringing content to the user through IP transit contracts. The point is that in the last 15 years, data storage costs have been reduced by 98% or 99%. So there was a great revolution in micro data centers, content delivery networks, caching, and other kinds of infrastructure that brought the content near the user and changed the landscape of internet architecture. Today, you do not necessarily need a full IP transit contract to bring the content to the user. Instead, many, many providers are getting the content just across the street, in a micro data center, and bringing it to the user. It's better for everyone because it's cheaper, the service is better, and you save money on IP transit contracts. So, theoretically, everybody wins.
But alongside that phenomenon, we've seen the growth of some gatekeepers that are gaining more and more economic relevance and adding value to the network, in fact. And people want to access the internet to get to that specific content. That brings big digital platforms a lot of negotiating power. So when there's a CDN and the content is near the user, and the operator has to deploy its own network to get that content without receiving anything for the traffic, instead of using an IP transit contract by which the operator used to receive payment from the content provider and, in the end, from the user, well, there was a two-sided market that has become a one-sided market. So that is the plea that telcos make. And there's another issue. The other issue is that, in many cases, telcos plead that they cannot charge the user more for the consumption of a huge amount of data, due to legal restrictions on data caps or to other market factors. So, in the end, the argument is that you had a two-sided market, by which you charged the content provider with the IP transit contract, and then you also charged the user, of course, because he's the user, and this two-sided market is becoming a no-sided market. So this pressure, which could be a competition pressure, is certainly a negotiation pressure. And this pressure raises a question, which needs to be answered before we decide if the fair share is fair or not. The question is: is this pressure that big techs, big platforms, are putting on telcos the result of bargaining power or of market power? This is the main issue. If it's bargaining power, it's part of the game; just go ahead. If it's market power, then there's a structural issue that must be tackled by regulators and by legislators and so on. So this is the main question that should be answered before we decide what to do with this. Anatel is working on it.
But my final line here, and I think the great takeaway of this discussion, is that they're both on the same side of the boat. Telcos and big digital platforms both depend on healthy, sustainable networks; otherwise, it's bad for everyone. So that is an incentive for them to try and find a market solution, which would be great for everyone. And I really trust that that will happen. We just hope for the best.

Luca Belli:
OK, so thank you. Thank you very much, Artur, for these initial points. Very well explained. And now let’s stay in Brazil, but from a consumer perspective to understand a little bit more the complete picture of this evolving discussion. Please, Camila, the floor is yours.

Camila Leite Contri:
Thank you so much, Luca, for the opportunity, and also to you who are here at this time of the morning. In Brazil, we have the context of this public consultation made by Anatel, but we also have a bigger context in Brazil of discussions about net neutrality and zero rating. And it involves not only the telecom authority in Brazil, but several other authorities. For example, zero rating practices were already analyzed by the competition authority in the past, and now the Ministry of Justice has been pressured, by civil society, to analyze these practices in terms of consumer law. To present myself, I am a specialist in digital rights and telecommunications at the Brazilian Institute of Consumer Protection, Idec. We are part of a network on digital rights, including access to the internet, and we are trying to raise these issues from more of a consumer perspective. And I think that this is the biggest challenge here, because when we are talking about fair share or unfair share, we are talking about big companies: big techs versus big telcos, the new companies versus the traditional companies. And the ones that suffer the most in the end are consumers. So, to answer the question directly: yes, for most of Brazilian society, and definitely for Idec and for the digital rights coalition in Brazil, it is an unfair share if companies have to pay fees for these kinds of services. We are very concerned about internet fragmentation, and we are very aware that the reason the internet is successful is that it was created to be an open, interconnected environment. Once we adopt practices like this, we separate: we create first- and second-class users, and first- and second-class services on the internet. Since the commissioner has already talked a little about the consultation in Brazil, I would like to focus on how the unfair share connects to zero rating.
And for this, in the last few years, we've been working very hard to present not only these critiques and arguments against this kind of practice, but also to bring data. In Brazil, we have a strong organization, CETIC, related to the Brazilian Internet Steering Committee, that conducts internet connectivity research every year. But we have also been developing research at Idec to bring this data, and nowadays we are also developing research with Anatel. To focus on the research that Idec conducted in 2021, pandemic time, sorry, if I'm not mistaken, one of those two years: we interviewed the poorer classes in Brazil to understand how they use the internet and how these kinds of limitations, for example the zero rating ones, affect their lives. We found out that people from lower-income classes may have mobile internet for only 21 days a month. So 21 days out of 30. What does this mean? It means that in the last days of the month, people depend on Wi-Fi, public Wi-Fi or home Wi-Fi, which not everyone has. And people rely more on mobile phone access, and this kind of access is limited: it is not based on speed, it is based on a data allowance. And once the data allowance is over, you still have some kind of access, but which kind of access do you have? You have access just to some limited apps, some limited companies, which in Brazil are the big techs, especially Meta. This brings up some issues, and I know that I am expanding the scope a little here, Luca, but it is important to say that when we are talking about telecoms, when we are talking about access, we are also talking about other rights. So disinformation is also a huge issue that is a consequence of all of that, because once people do not have access to confirm information, they receive some information, for example on WhatsApp or Facebook, and they share it without confirming it.
So we have to talk about these issues, not only about net neutrality, not only about internet access, which would be at the center, but about how this affects several other rights. So thank you so much, Luca.

Luca Belli:
Thank you very much, Camila. So, to start stressing this connection between fair share and zero rating, and actually to add a little element of complexity: I think that maybe for people in the global north, if we can say so, where zero rating is not that common, this does not sound evident. But in global south countries, observers are a little bit puzzled when they hear this debate about platforms contributing more through network fees, because platforms have basically been subsidized with zero rating for the past 10 years. And I think we have been speaking about this issue several times over the past years: the fact that zero rating would actually have created this kind of situation, where large platforms are responsible for most of the traffic precisely because they have been subsidized, sponsored for free with zero rating, in most global south countries. So it is interesting to see that nowadays, after having offered this traffic for free, and as Camila was saying, in the last weeks of the month there is a new kind of poverty in many countries. In the end, you finish your data allowance like you finish your money, and then, as you don't have more money to go to the supermarket, you only have a data allowance for social media, basically. But that is not something that has happened because the internet is like that; it is the result of a specific business model, and many of us have questioned that business model. So now it is surprising to see, on the other hand, operators that claim fair share, when this, if you want to call it that, large use of the network is the precise result of this business model that has been enacted over the past 10 years precisely by operators. To understand a little bit more of the complexity, let me give the floor to Jean-Jacques, who has also been dealing with these issues for, I don't want to reveal your age again, but for a lot of time. Thank you. Thank you, Luca.

Jean-Jacques Sahel:
Good morning. I mean, I've been dealing with the open internet since I first tried the internet in 1993, because it's always been open, although it was very, very slow, I have to admit, in 1993. Certainly from where I connected at the time, in Marseille, it was like a 56k modem connection linking three universities. That was the only connection. Now we have 12 subsea cables arriving in Marseille; we have fantastic connectivity. Although I've actually been hearing people complain that we have too many cables arriving at the city, so we need to know what we want, anyway. But that's kind of part of the discussion, in a way. I think, generally speaking, it's great to be at the IGF and in forums like this, where stakeholders from different parts of the ecosystem can look at the concerns that there are, try to look at the evidence, and share ideas for what could be improved. And I think what we all care about here is, at the end of the day, how we can get a good internet: an internet that's affordable, that's got good speed, good capacity. And why do we want that? Because, well, it's enjoyable for people, hopefully, but it can also support digital transformation. It can help our societies, it can help our economies, it can help us as people every day. That's really the end goal: connectivity as a means to support wider benefits to the economy and society. And so when we get to this debate on network usage fees, as they are referred to in some places, I actually think it's a false debate. And it was a false debate when it was first mentioned in something like 1999; you can see the quotes, they were pretty much the same as what's being said by some people now, sometimes actually the same people. It was a false debate 10 years ago too. Not to say we shouldn't have the debate, but I think we need to move on.
I think the reality is that, and I think most of us in the room know this, the internet is an ecosystem, starting with users and encompassing essential elements in this virtuous cycle that we have. These essential elements are the telecom operators, the network operators, and the content and application providers. There is a virtuous dynamic where there is innovation, where content and services are created that appeal to customers, to consumers, to users, and then they subscribe to the internet. And I think that has been working well for 40 years; it's still working well today. That's what the open internet is about: it's innovation without permission that boosts this whole ecosystem and supports the benefits that the internet has been bringing us. And when we think about this from a business perspective, when we, as a company, look at ISPs, telecom operators, we see them as essential partners. Indirect partners, in the sense that we create content and applications for our users, and they provide the connectivity that users want in order to access our content and services; so there's a nice indirect dynamic. And then we have direct relationships. We partner with telecom operators on a huge number of things, and it's been like that for many years. We do it for commercial reasons: they might resell our cloud, or add bundles of YouTube Premium, for instance, in their own subscription packages. But we also do some infrastructure work with them. For instance, increasingly, we help them by storing some of their network elements in our cloud; they use some of our data analytics to optimize their networks and make cost savings. Or indeed, we look at much more innovative things; for instance, we have a joint 5G research center with one of the large operators in Europe. So we have all sorts of direct partnerships as well. It's a very dynamic and generally very fun way of doing things.
It is a fruitful environment, and I think that's the sort of thing we'd like to focus on: this virtuous ecosystem. So, going back very briefly to network fees, because there have been some great points made: I think it has been shown already that the introduction of these fees would be very unhelpful to consumers, to competition, and also to the technical workings of the Internet, to the efficient routing of Internet traffic. Many stakeholders have said that, including a number of governments, telecom and competition regulators, and experts. We need to focus instead on real problems, thinking about how we can get the open Internet to really favor innovation and encourage users to use it, to enjoy the Internet, and to support digital transformation. I think we need to join forces and look at the genuine issues. Think about things like: how do we reach the last five or ten percent of the population that live in hard-to-reach areas? How can we use a mix of interesting new technologies to reach them, as an example? How can we continue to make it easier to deploy infrastructure, to lay fiber, to land submarine cables, or to make more spectrum available, for instance on an unlicensed basis, so that we can facilitate things like Wi-Fi offload, which takes the strain off the networks? How can we bring resiliency and diversity to connectivity and support the use of the open Internet for the good of society, the economy, and us as users, as people? Thank you.

Luca Belli:

Excellent. Thank you very much, Jean-Jacques. And actually, now I would like to give the floor to KS Park, because South Korea is actually the only example of a country where this kind of fees has been introduced, so it would be interesting to understand the concrete result, in the Internet ecosystem, of the implementation of this model.

KS Park:

So the sender-pays rule obviously works like a tax on the Internet, to be exact, a tax on speaking online, because to speak online you have to push data out onto the network, so the more you speak, the more you have to pay somebody. Just like in the days of, you know, snail mail and telephony: to send a letter out you have to buy a stamp, and to make a phone call you have to pay the telecom company. Now, the rule was instituted only among ISPs, and using that as an excuse, the policymakers did not consult with consumers and content providers. So the sender-pays rule applies only among ISPs, but its economic impact of course trickles down to content providers, because ISPs hosting popular content providers will end up pushing out more data to other ISPs, because users on other ISPs will want to access that popular content, and, you know, accessing the content means the data files made up of HTML have to be pushed out to the users. So that creates disincentives across the board among ISPs to host popular content, or any content, and that has basically removed the competition among ISPs in hosting good content on their networks, and that has increased internet access fees.
Well, it didn't really increase in absolute terms. What happened was that, because of the technical advancement Arturo talked about, internet access fees elsewhere have been falling by about 20% every year, but in Korea that didn't happen. The rule was instituted in 2016, so it has now been in place for over seven years. Almost immediately, in 2017, internet access fees in South Korea, or in technical terms IP transit fees, were measured at eight times those of Paris and five to six times those of Los Angeles and New York, and the trend worsened: in 2021 they became ten times Frankfurt and eight times London. You can see the very hostile financial environment that domestic startups face, and I say domestic because it is the domestic ones that have to buy internet access from local telcos. In 2021, Korea's answer to Netflix, the video streaming service Watcha, was paying 10% of its revenue in internet access fees.
In 2020, the operator of a public interest app, the COVID location announcement system, complained that because of high internet transit fees they could not fully function and could not meet all the demand. So that is what is happening with domestic content. Overseas content providers also have a problem, because this disincentive among ISPs against hosting popular content applies to overseas content as well, when it sits on cache servers. I should have mentioned how data storage has become cheap, so now the content comes from across the street: a lot of content is being served through cache servers on the networks of Korea's ISPs. But even hosting a cache server has become unpopular among ISPs, because they have to bear the sender pays burden, so they have been increasing the paid peering fees that they charge the overseas content providers. Twitch, which is a popular gaming platform, could not continue making the payments. They could have done two things: charge the uploaders from Korea for uploading, because it is because of them, or charge the content that is popular among Korean eyeballs, because it is that content that generates the payment burden on Twitch. They couldn't do that, because making people pay like that is very unfair. So what they did instead was intentionally degrade the service: they lowered the resolution to 720p, so only in Korea were users watching at a lower resolution than in other parts of the world. As a result a lot of users are leaving, and a lot of gamers are leaving too, because a Korean gamer's video will be watched mostly by Korean eyeballs, and if Korean eyeballs are getting lower resolution, Korean gamers will leave. And we can extrapolate that to Netflix. Squid Game and other popular Korean titles are, yes, world-popular, but they are
also initially popular with Koreans too, right? It is Korean-eyeball-heavy content, and if Netflix is required to pay Korean ISPs for accessing Korean consumers, they will have to reduce investment in Korean-eyeball-heavy titles. So that is the situation in Korea. I hope you don't have to learn this lesson, or that you learn the lesson either way, and I hope I have a few minutes later to talk about some of the more general points.

Luca Belli:
Thank you very much, KS, for this insight. I propose we finish with all the presentations, so that we then have a good moment for discussion, because I'm sure there will be a lot of remarks, comments and occasions to discuss this further. Let's move now to Europe, which has been the center of attention for the past year, at least since the consultation from the European

Commission, where ETNO has been one of the main proponents of these fees. So please, Maarit, the floor is yours.

Maarit Palovirta:
Thank you, Luca, and thank you to the organizers of the panel. It's very nice actually to hear global views; as you know, we've been discussing this in Europe a lot, mainly amongst ourselves, so it is really fruitful to also have this global exchange. Just to go to the title of the workshop: many of you know that zero rating is no longer a reality in Europe, so I will focus my comments on the fair contribution as such. Perhaps, for the sake of the audience today who maybe haven't followed all the discussions in Europe, I will start with a few thoughts on how we see the telecoms market in Europe at the moment. I already heard some of the keywords in the different interventions, namely to do with market structure and competition and the related dynamics regarding consumers and, of course, society as a whole, so I'll try to give you a bit of background as a starting point. We have now for nearly two years been advocating and trying to describe the context around this fair contribution issue from our perspective. Many of you perhaps saw that the European Commission published the summary results of the public consultation just two days ago, so I'll use one of the quotes from the ETNO-GSMA reply, with regard to competition. In Europe, the number of operators in the EU markets serving more than 500,000 customers, so more than half a million customers, is 38; in the US it is 7, and in Japan it is 4. So the market structure in the EU is significantly different from many other parts of the world that could be comparable to Europe. That means that, as we are in an industry of
heavy capex investment, digging fiber into the ground, building towers for 5G and so on, which requires a lot of money and a lot of effort by many people, this current market structure simply does not allow for proper investment into these infrastructures, which our society wants, our consumers want, our policymakers and politicians want, and we want too; the return on investment does not allow for it. We also have some specific regulatory circumstances in Europe. We have competition policy that restricts mergers, so telecom operators are, to simplify, not really encouraged or allowed to merge with each other for the moment. We still have some heavy sector-specific regulation for the telecom sector, for example on pricing: you have all maybe heard about the roaming rules in Europe, where wholesale prices are fixed, so the flexibility to price services is limited. And to give you an idea of the competition in the market, due to this sticky prices situation and the fact that there are so many players: despite the very heavy inflation that we all saw in our countries last year, the telecom sector in Europe was the only industrial sector where price growth was negative. Prices went down despite inflation, because of the heavy competitive pressure on the industry and the pricing elements I just described. So there are some real pain points in the European telecom market that may not necessarily exist in other parts of the world.
Now, when we look at the consumer impact, of course the main point for operators is to provide good services for the consumer, and affordability certainly is a key issue; I think I just provided some elements of why Europe has some of the most affordable internet services in the world. But other factors are also important. Quality: if we don't invest sufficiently, quality will eventually suffer. Trust: if we don't invest in and update security and bring the new layers into the networks, this factor will suffer too, and in the end, we believe, this will start making our consumers unhappy as well. So it is not only a question of prices in Europe; it is about these other factors, and we are really looking at this from a more holistic perspective. A third factor I would like to raise, because we talk a lot about societal welfare at the moment: security is certainly one thing, but environmental sustainability is also important. We are now developing different kinds of KPIs in Europe to try to make sure that all industry players, including operators and networks, are as sustainable as possible, and that of course means that we do not only measure but also, again, invest in networks to make them as sustainable as we can, wherever possible. So that's a little bit of where we come from in Europe, and we see that our hands are at the moment a little bit tied. I would also like to touch on the net neutrality point. You know very well that Europe is one of the very few countries or regions in the world where we have net neutrality, open internet regulation, and you may also know that the European Commission evaluated this regulation, I think earlier this year, and asked many stakeholders, including us, whether we should reopen this regulation and perhaps reconsider it. And as the Commission has
also said publicly many times, no stakeholder came forward asking for a reopening of the regulation. Indeed, ETNO, together again with the GSMA, so representing really 80 to 90 percent of the European telecommunication markets, said we are happy with the open internet principles and the text of the regulation. You may argue whether it is the best as we look at the developments in our industry today, but it is not worth reopening, because we still believe that the principles are valid. From our perspective this brings me to the market asymmetry that we see related to the contribution. It means that we as operators have a must-carry obligation, again to simplify things: operators will carry any traffic, however big or small, in whatever shape or form, coming from whoever, to the end user in Europe today, or, if they don't, they of course risk going to court, which some operators might want to do, but in general I don't think this is the case. So there is a one-sided obligation to deliver traffic, and this gives us very limited possibilities to manage and try to optimize data traffic, which, as we know, has gone up quite a lot. So we are in a situation where we have pressure on investment, but we also have pressure from this increasing digitization of our society, which of course we all welcome, but we need to make sure that there is a balance between the networks that are supposed to deliver all this, which are a key part of our internet ecosystem, and the services and content part. Maybe I'll just stop there, and I'm happy to chip in later.

Luca Belli:
Thank you very much, Maarit, for providing us a very good overview of the telecom market in Europe. Speaking about the consultation, I know that Thomas has also been very active in participating in it, and might have a different perspective on it, so please, Thomas, the floor is yours.

Thomas Lohninger:
Thank you very much, and I would like to thank you, Luca, for the opportunity to be here. Representing the telecom industry in this round cannot be easy, and I think it is always a pleasure when we get to have these exchanges; these honest exchanges are necessary for moving the debate forward. Maarit and I had a very similar debate, actually, at the last IGF, in Ethiopia, and I'm a little bit saddened that we always need to have these debates in other corners of the world, because when this is discussed in Brussels, it is very rare that voices from consumer protection or civil society are present, so maybe that's a good reason for having the IGF. Anyway, I would also like to touch on the issue of zero rating and network fees, because, yes, we have no more zero rating in Europe, but those two things are still very much connected. With zero rating, the ISP is in control of how the user experiences the internet. Zero rating was a very common practice, happening in all but one member state, and it incentivised users to consume more and more data; the very traffic that operators themselves encouraged is now being framed as the problem, as cluttering up the pipes. I also want to focus in a little bit on why there is such a drastic shift in what we are discussing here. Interconnection used to be something almost altruistic: it is something where nerds put cables together in order to make the internet whole, to allow global end-to-end connectivity, and it was usually done settlement-free, optimising for quality. With this proposal we will drastically step away from that. We will optimise for profit, and we will maybe no longer have local caching servers.
Data would, as we see in South Korea now, need to travel far longer distances in order to get to us, and it would ultimately become more expensive, either in the prices that we as consumers have to pay or in the quality that we get. And I think this is a very important point, because maybe it is not clear to everyone why net neutrality is inherently incompatible with network fees. Everybody agrees that, even in the oldest version of net neutrality, you cannot have paid fast lanes. You cannot have one railway for everyone, and then a faster railway where you have to pay in order to be on it. And in fact, right now you only get good quality with Deutsche Telekom, a big German provider, if you pay them. You will suffer every night, every peak hour, with your service, if you don't have a paid connection into their network. Technically speaking, yes, they are not slowing down the traffic within their network; they are just ensuring that the entrance to their network is a bottleneck that is always congested. And funny enough, their prices for these interconnections are 10, 20, 30 times more expensive than everyone else's. It is very important to make a pause here. We are now one and a half years into this debate, into the current iteration of it, and for the first time we have a public record, from the consultations in Brazil and in Europe, of what everybody has said. I only had time to look at the European one, but I think it is interesting to just list the organizations that have contributed and what they have said. We have BEREC, the body of European telecom regulators, saying this violates net neutrality and is dangerous for the internet ecosystem, and many other organizations saying the same.
We have multiple organizations that went further in their statements. And it is not just the regulators: it is the public broadcasters, the private broadcasters, the journalistic associations, the technical community from the IETF downwards, and, of course, the copyright industry. Disney is on the same side as Google and as consumer protection organizations. So this is a coalition of unlikely allies. And to conclude: I think actually we don't have a problem with the market structure in Europe. Yes, telecoms are making profits, just not, in their view, big enough profits. But if the whole of society is complaining and saying, stop, this will hurt us, maybe the profit margins of telecom companies shouldn't be the deciding factor when everybody else gets hurt by such a proposal. And lastly, I also want to say that almost five months ago it was announced that Commissioner Breton is giving up on this proposal for this legislative term. To remind you, Commissioner Thierry Breton used to be CEO of France Telecom, and as a European I have to say I am ashamed by the way he has broken his word in the European Parliament. Germany and the Netherlands issued letters to the Commission saying exactly the same: please uphold due diligence standards. And at least when Europe influences a worldwide debate, as in India and Brazil where everyone is referencing Europe, we should hold ourselves to a higher standard. Thank you.

Luca Belli:
Thank you very much. I think you make a very good point about the very large and diverse spectrum of stakeholders that participated in this consultation, raising concerns about the effectiveness of this proposal. And, with all due respect for Google, of course, there is maybe a need for a better, more effective regime of taxation; but the fundamental question here is that perhaps the way forward is not to tax the traffic that is injected into the network because consumers demanded it, but rather to redistribute wealth in a more effective way, so that the benefits can be redistributed socially, right? I think virtually everyone would agree that some kind of fair share is not a bad idea, but maybe this type of fair share is not really the solution to the problem, if what you want is a larger redistribution of wealth. There are also a lot of international elements here, maybe also shared by Konstantinos, who has been working a lot on internet policies and infrastructures for the past decades, again, not revealing anyone's age here. So please, Konstantinos, the floor is yours.

Konstantinos Komaitis:
Thank you, Luca. Good morning, everyone, and thank you for showing up; I really thought that this would be an empty room.
So I think that one of the things that has happened in the past 20 years is that we have been attempting to think of the internet, which is a new medium, by applying old rules, and telecoms rules have been one such set of rules that we have been trying to apply ever since I remember starting in this field. And this is really a bad idea, because telecoms rules were designed for a very different kind of network, and applying them to the internet does not make sense; it will create barriers to entry in the most unpredictable of ways. We are at the Internet Governance Forum, and all of us are talking about how to support the open, global, and interoperable internet, and there is really no question that if we apply this policy, the open, global, and interoperable internet will be undermined. The great thing about the internet is that there really is no network that is more important than another network. The more networks connect with one another, the bigger the value for the networks themselves, and also for their customers. This also creates more resilience, because the more networks you have, the more decentralized the system is, and you avoid single points of failure. So the internet community has created a system that works, that does not require regulatory intervention, and that has allowed for low barriers to entry. And, of course, it has fostered all these very collaborative relationships. Collaboration is very challenging in the best of times, but let's not forget that the open internet is an outcome of collaboration amongst many different and diverse actors. So that was my first point.
The second point is about infrastructure, and the idea, at least the way this policy idea has been framed, that there is only one actor contributing to infrastructure, namely the telecom operators. And this is not necessarily true, right? Technology companies, content and application providers, are contributing heavily to the internet ecosystem and its infrastructure, with CDNs, data centers, and cloud services being clear examples. The OECD is actually working on a report, which hopefully will be released next month, and the scoping paper made a really strong case about the diversity of infrastructure investment, and that it comes from the most unpredictable places we can imagine. Municipalities contribute to internet and broadband infrastructure. Pension funds and hedge funds contribute to infrastructure. Of course telecom operators, technology companies, tower companies. So we see a whole huge ecosystem where different players contribute to make sure that we have a reliable, secure, and sustainable infrastructure that can support the increasing demands of users, because the fact of the matter is that there is increasing demand. Right now, everybody wants to stream video, and that is what it is. But there is this collaboration, if you want, that is happening: people are coming together to make sure that networks can actually support this. Finally, the European Commission has tried to ease concerns by saying that this is not a network neutrality issue. But I would bet, well, not a lot of money, because I don't have it, but I would bet money that if this were to go before the ECJ, the outcome would be very, very different.
In Europe, as we have heard from everyone, we have the open internet regulation, and between 2020 and 2021 there have been four cases, two of which actually said that it is not just technical discrimination that violates network neutrality, meaning when you block or throttle traffic, but also economic discrimination. Two cases in particular focused specifically on the idea that if you choose certain applications and content providers to make those deals with, and do not apply those deals to everyone else, this is also against the open internet regulation and network neutrality. So we have to be clear that this is predominantly a net neutrality issue, and, again, I appreciate the effort to make it seem less so, but that is not really the case. And I will stop there.

Luca Belli:
Thank you. Thank you very much, Konstantinos. Actually, one of those cases, the Telenor case, was precisely also about zero rating, and it is good to hear from Maarit that Europe has now abandoned this policy that many of us have criticized over the past years. But it is also good to remember that until the pandemic, in countries like Germany, there were models like BingeOn that were literally, as the name suggests, an invitation to binge on video. So the fact that the telecom operators now consider this increase in traffic as something problematic may seem curious to those who were used to seeing the very same telecom operators offering video traffic for free and encouraging users to use it as much as they could, carelessly, through BingeOn-style models. We have had a lot of very interesting suggestions so far, and I'm very happy we still have half an hour for debate, because I'm sure there will be a lot of it. Let me open the floor, because I know that you are not only very brave to be here on the last day, but I also see a lot of people that may be interested in sharing comments or asking questions. If you want to ask questions or share your ideas, please line up and use this mic. Otherwise, we can start with the reactions here from the panel. Yes.

KS Park:
Okay, yes. I think one source of confusion about whether network fees, or fair share, violate network neutrality is the presence of paid peering. Paid peering does not account for a lot of connections: most connections, more than 99.99%, are settlement-free peering. But a lot of traffic goes through that small number of paid connections.
So if you look at the data from ARCEP, the French regulator, although the number of connections is mostly settlement-free peering, in terms of volume a sizable portion of internet traffic goes through paid peering points. And neither the FCC in the U.S. nor BEREC has clearly said that open internet regulation applies against paid peering. So, while the telcos are not really saying it, to be a telco advocate and make their argument more reasonable, they may be saying: oh, you know, paid peering has existed before, Google has paid Orange paid peering fees, Netflix has paid Comcast paid peering fees, so we just want to make it a rule, to make it more fair, and this does not violate net neutrality. But what they are forgetting is that this will not be enforceable, because it will be mandatory paid peering, and what has really supported the information revolution is two rules: freedom to connect, and no freedom to charge for data delivery, which is net neutrality. These two rules are actually two sides of the same coin. It is because ISPs are bound by this rule that they cannot charge for data delivery; they can charge only for connection capacity, not for data delivery.
And it is because of this, what Maarit called a one-sided obligation, although I don't think it is an obligation; it is more like an exchange of promises between ISPs so they can sell this global product, the Internet. It is not really an obligation imposed externally. Anyway, it is because of this one-sided obligation that all ISPs have a freedom to connect or not connect. And if mandatory paid peering is imposed, what is going to happen is that Google and Netflix can say: oh, you know, we don't want to pay fees to access customers in your network, and then they are just going to disconnect, and the eyeballs in that country will no longer access Google. If all these content providers burdened with peering fees don't connect, what are the regulators going to do? There is really not much we can do if we want to keep the Internet as it is. So I think it is unenforceable, and I think it is really pulling the rug from under the fundamentals of the Internet architecture, which are the freedom to connect or not connect and the absence of data delivery fees. Maarit talked about how prices are falling; I want to ask whether your costs have been falling also. Because of the advancement of technology, putting together CAPEX and OPEX, even with data traffic growing about five times, network maintenance and development costs have remained about the same in the past five years.

Luca Belli:
Thank you very much for these very extensive points, KS. And now, Maarit, please, the floor is yours to reply.

Maarit Palovirta:
Yes, maybe immediately on that question: the costs, no, they are not falling. And if you read the summary of the consultation carefully, there was actually some general language around that as well. They are of course not able to quote numbers, because these are commercially sensitive, but the costs are not falling, no. I wanted to comment on the IP interconnection market, because in Europe, and much globally as well, there is still this very old-fashioned way of thinking that IP interconnection means peering and transit, that that is the market and the basis for competition. Now, in recent years we have seen reports, including by BEREC and by Analysys Mason, that in fact CDNs should now be considered a substitute for transit and peering. So the market definition has effectively changed, and we should be looking at a market that also includes the CDNs. If you look at the CDNs that, for example, many tech players have in Europe, these are often proprietary infrastructures, and their owners and operators sell capacity at a price to whoever needs their content delivered. These prices, because they are proprietary networks, are not publicly known, and they are not considered in a market analysis when we look at the peering markets. We are actually very happy to see that BEREC, in their program for next year, have quite an ambitious task: they will be reassessing the IP interconnection market, very much taking into account the role of CDNs and how they have contributed to the IP interconnection market, because this is of course a development that has substantially changed the scene in the last five years. And the last assessment that BEREC did was five years ago.
So when we talk about regulatory asymmetry, this is one example of such asymmetry. And if you look at the internet ecosystem a bit more widely: we were recently with Jean-Jacques at the same BEREC workshop, talking about submarine cables and undersea connectivity, and there we see a little bit of the same phenomenon. We have these public cables where capacity is sold to others; some of them are run by consortia, including European operators and some of the big tech companies. But there is a substantial number of cables that are purely private, and European operators, for example, don't have the investment capacity to run many private cables like that. We hear that about 70% of the traffic between the US and the EU now goes through these private cables, so it is not actually on the public best-effort internet. Meanwhile, the telcos that operate cables in consortia make them publicly available and sell capacity. We should also think about what this does to neutrality: some content gets a real highway in a proprietary network, whereas other content has to go through the publicly available channels at a kind of best-effort level. And we really applaud this, I think it is great, I mean, this is not a regulated market, so it is great that companies invest. But again, going back to regulatory asymmetry, when we look at how this traffic then comes into, in our case, Europe, and is then divided and goes through the national networks, we need to look at the market power and what it does in this bigger picture. Hence we also very much welcome BEREC's upcoming work on this: they will do work on the entry of CAPs, content and application providers, into the ECN market, the telecom markets in European terminology.
They are also going to be analyzing the role of cloud computing in this context, and then doing a kind of holistic mapping of the submarine connectivity scenario. So I think that that will give us a bigger picture. And I understand that the fair contribution is sometimes very difficult to define because it is a very specific point in that ecosystem. But we need to look at the bigger picture and then see what the impact of this interconnection point is vis-à-vis the access network investment in Europe. Thank you very much for this. And actually, it will be very useful also to study the methodology that BEREC will develop for this kind of study, which could even be exported as a good practice, maybe globally. I’m sure that Jean-Jacques has a reaction to your reaction, so please, Jean-Jacques.

Jean Jacques Sahel:
Thank you. I want to pick up on both Maarit’s and KS’s points, and I want to start by thanking Maarit, because I think it’s been a really interesting debate where there have been these accusations flying about fair contribution, saying that content and applications providers do not contribute. But as Maarit has just explained very extensively, there are pretty massive contributions by content and applications providers. Of course, as Konstantinos was saying before, I think it was you, content and application providers help this ecosystem by providing the content and services without which, frankly, no one would pay telecom operators for an internet subscription. The only revenue that a telecom operator would make, if it weren’t for the content and applications that we invest in, would be telephony and SMS. If they want to go back to that, that’s fine. They don’t have a must-carry obligation for the internet. They can stop being internet providers, because the internet is about a network of networks: once you connect to one endpoint, you have access to the global network. That’s what the internet is about. But you don’t have to provide that service connecting to the global, unique, open internet. You can provide private networks; that’s a very profitable business. Or indeed, you can invest in new types of related technologies, like CDNs. You can be a CDN operator, and in fact a lot of telecom operators have developed great CDN services, which they’re making a lot of money on. So going back to the contributions by content and application providers: there’s that massive investment by content and application providers in innovating, in creating products that will delight customers and encourage them, therefore, to subscribe to internet service provider services and upgrade their subscriptions to things like 5G, for instance, or fiber. Then you mentioned CDNs, and I think that’s really important. Yes, there are CDNs. 
What do CDNs do? They help to transport traffic much more easily, and so that’s another payment that content and application providers can make in order to deliver the traffic to end users with better quality, another form of contribution. And then you mentioned things like subsea cables, and the great thing about those is that, whether they are private or public or a mix of public and private networks, again, they help to bring traffic much faster and more efficiently, and save massive costs for ISPs, because instead of ISPs having to fetch the content from another part of the world, the traffic is brought by those content and application providers 99% of the way to the user, and the telecom providers can do the last mile. That’s a huge cost saving for the operators. Cloud services are the same way, and when we look at how the cloud and associated services and some of the data analytics and AI can help to optimize networks to support operators, which is what is happening today, again, it’s saving costs for ISPs, and it’s providing new avenues for monetization for the telcos. So as a summary, we’re in a situation where there is absolutely zero point in claiming that there is no contribution by the content and application provider sectors, because there is enormous contribution, as just laid out by Maarit. And more importantly, I think we should look to the future, we should look at new technologies, we should look at the evolution of this market, as BEREC and the OECD and others are doing, and look at the positives of how we can move forward together. There are improvements we can make to regulatory frameworks in Europe and elsewhere, in Brazil, et cetera. I think we should focus on that, on the real problems, look at the evidence, look at how we can help each other as an ecosystem, and focus on that, rather than some people trying to instigate fights and fake battles when there are none to be had, and that would do a disservice

Luca Belli:
to everyone. Excellent. Thank you very much for this. I see there are questions, and I also see there is one, at least, from the audience, so let’s start with the question from the online… So let’s start with the question on site, and then if we can have some of these online participants speaking, we can do it, otherwise we will only stay with the… Otherwise, the online participant can type the question and we will read it. Okay, thank you, Luca. Good morning, everybody.

Audience:
My name is Raul Echeverria, I’m from the Latin America Internet Association. I don’t know if you know, but everything that is discussed in Europe has a huge impact on the policy agenda in Latin America, and this is not the exception. So it would be very ironic if Europe abandoned the idea of moving ahead with this while we end up with some policy decisions in some countries in the region. Some years ago, all the interconnection between ISPs and content providers used to happen in Miami, and in the last few years the Internet technical community has made a huge effort to develop a completely new interconnection ecosystem, which has had a very positive impact on access for the people. We are discussing this now in Brazil and also in the Caribbean, and that is very interesting, because it’s a kind of paradox: Brazil is probably the country that has invested the most in exchange points in the world, and the Caribbean is probably one of the places that most needs improved interconnection. So it doesn’t make sense that those are the two places where we are discussing this. But what will happen? Who will win with this? As the colleague pointed out, 99 percent of these agreements are informal and for free. Why is this? Is it because the content providers, the ISPs and the telecoms are stupid? No. It’s because all of them understand that they are adding value to each other. So the market already spoke. So what will happen? In my view, if a country like Brazil adopts a policy on this, obviously, companies will pay what they have to pay, and they will follow the law. But then they will not have incentives to bring their caches and do peering at the exchange points and to bring the caches into the telcos’ infrastructure. They will say, OK, we will pay what we have to pay, but now you have to pick up our content in Miami. So we are going 15 years back. 
And they will say, ah, and don’t forget: obviously, it will not be informal. Now we will have to sign contracts, so we will involve our legal departments, and it will take one year to have the contracts in place. Ah, and it will not be for free; we will have to negotiate that. At the end of the day, the situation will be the same as now. They will pay exactly what they have to, to have a zero sum in the agreements. So there will not be winners, but there will be losers. Who are the losers? Small ISPs, small content providers, small platforms, small Internet companies. That will be the end of the tale at the time of starting negotiations with the other parties. So I think that the disruption will be huge, and at the end of the day, the result for telcos and content providers will probably be the same. Thank you very much, Raul, for providing these additional points. Do we have our remote participant speaker? Can he or she speak? I don’t see any satisfactory reaction from the technical team. Hello? Oh, yeah. Oh, Gonzalo. We meet again. Hello, Luca. Hello. I don’t see you. Hello. Please. Hi. This is Gonzalo. I work for Telefonica, an ISP present in Europe and an ETNO member, so working with Maarit on this issue. I just wanted to address a few of the comments raised during the session regarding the claim that the costs of the networks are going down relative to revenues, and also regarding how this virtuous cycle is benefiting all of us. I would like to stress that telecom revenues have decreased 30% for the European telecom sector since 2011, whereas in the US, for example, those same revenues have increased 18%. And at the same time, the return on capital employed, which takes into account the revenues, the costs and the investment, has been below 6% in Europe, whereas in the US, for example, it is at the level of 14%. 
And actually, that means that in Europe, the returns are lower than the cost of capital. So the money that we have to pay to get the funds for those investments costs us more than the returns that we make on those investments. So this is not really a virtuous cycle. It is not a situation where we all benefit. It was the case at the very beginning of the internet, but it is not the case anymore. When telecom returns are lower than the cost of capital, we are losing money on every investment that we make. So basically, what we aim at with fair share is to foster investment and to keep up the quality of the networks that Europeans need. And for example, even though investments have been at levels of 20% of revenues, which is similar to the rest of the world, in terms of investment in euros per capita, Europe has been at levels of 100 euros, whereas in the US it is 200 euros per capita. And that has meant that, for example, the take-up of 5G in Europe is 15%, whereas in Korea it’s 50%, and in the US it’s 40%. So Europe is already falling behind in the development of networks, and that’s why we want to change the situation. And one last comment on the comment about Mr. Breton: I think that we have read different publications, because what I see is that Mr. Breton has said he wants to go ahead with a Digital Networks Act. And in fact, I have not seen anywhere that it has been delayed until 2025. If you see how a legislative process works in Europe, it is impossible to complete a legislative process in one or two years’ time. So even though the process might be starting and the proposal might be coming up in early 2024, it is impossible to have this passed through the European Parliament before the general elections for the parliament taking place in June. 
So actually, I don’t see any delay there, just realism, taking into account that the processes in Europe take two years at the least, and in some cases, as you have seen with privacy, the DMA and others, it takes three, even four years. Thank you. Thank you. Thank you very much, Gonzalo, for these elements.

Luca Belli:
I think we have less than 10 minutes left, so I would give all of the panelists a last chance to provide a final remark, some food for thought, because we have already had a lot of very interesting comments and discussions. Sorry? This was not the question. Okay, I thought there was another question. What is the question? But maybe this question from the online participants. Let me also thank Shilpa Singh from the University of Melbourne, who is our remote moderator. So do we have a question from the online participant? Can you take a mic? Can we pass a mic? So this question may inspire your last thoughts. And yes, you.

Audience:
Please, Shilpa. Yeah, my very rough understanding of fair contribution is that it redistributes money from OTTs to telecoms. My question is: share for what? In the previous session that this person organized, this person shared the same opinion, and is it okay if this particular money is used as a universal service fund for rural areas?

Luca Belli:
To improve the whole balance sheet of telecoms is not adequate, in her opinion. Okay, excellent. So I think the question is whether this money would be used to improve universal service funds or for other uses. You can reflect on this question while you think about your final thought.

Kamila Kloc:
So I would like to start maybe with Kamila, and so we follow the order. Let’s say one minute per person. I’m gonna be quick. I can see a battle of titans of different industries in here, and it is important to understand all of the arguments. But beyond these arguments, and beyond the fact that we are mostly focused on the global north (I can understand that Europe has a different context), we are talking about two practices that harm consumers. Zero rating, which continues to affect the global south, favoring big techs through allegedly free but limited access, which in practice is a bundle that obliges people to use certain apps. And fair share, which might favor telcos and potentially increase prices and reduce quality, as we were saying about Korea. So in both cases, we are talking about distortions of the telecom market, by present practices or future practices, but we are not focusing on how this affects consumers in the end; we are talking about industries’ interests. So let’s focus on consumers when we are talking about this, on open internet access, on meaningful connectivity and on how to increase access, not only on how we can boost companies. So thank you.

Luca Belli:
Thank you.

Maarit Palovirta:
So just on that question: yes, of course, from our point of view, these funds would go towards investment, especially in those areas where we don’t have coverage, but also in capacity. But going to my final comment, Thomas was saying that it might be difficult for us to be here, but I would like to say it’s actually a real pleasure, because we took a decision early on in this discussion, actually before we published the very first report, that we want to have the discussion with everybody, with all stakeholders, openly, ourselves, and not hiding behind think tanks or consultancies. I think that we are trying to live true to this intention. And I will provide one personal thought and one political thought. Personal thought: I think this is a very healthy check-up of the internet ecosystem, to see where we are today and, in the case of Europe, where we have a kind of intense regulatory framework, to see what needs to be done there. And the political message comes from our friend, all of our friend, Thierry Breton, who in a LinkedIn post two days ago says that we need, quote, a bold, future-oriented, game-changing Digital Networks Act to redefine the DNA of our telecoms regulation, unquote. And I’m just really pleased, and we put a statement out together with GSMA, that the commission has this ambition to actually deliver a new regulatory framework for us. And this, we hope, means that there will be fair contribution, but we also hope very much that they will address some of the pain points on competition, on scale, on sector-specific regulation

Luca Belli:
that I was describing earlier. So thank you. Thank you. So my final message would be, as a regulator in Brazil, that we really should assess and define precisely the disease that we want to heal before prescribing the medication.

Artur Coimbra:
So you have a commitment from the regulator in Brazil to an evidence-based approach. And well, if the problem is lack of money for investment in networks, we should design a model that allows more money for investment in networks. And depending on the way you design a fair share, you may or may not reach that objective. If there is price competition and you create a fair share, there is a great chance that this money runs away through ever-lower prices, because there is competition over pricing for the user, and so no money is left for investment. So this should be designed in a way that gets what we really want, which is money for investment. So that’s the final message that I bring: the commitment of the Brazilian regulator that we will make every effort to gather evidence, to try to define the disease before prescribing the medication. Excellent. Indeed, evidence-based policy should be based on evidence. And we are very happy that this panel is providing a lot of very good thoughts on how to collect this evidence. Final round of…

Luca Belli:
Thank you, and thanks again for having us.

Jean Jacques Sahel:
We’ve been trying to organize a workshop at the IGF. We did a proposal, a really balanced panel, et cetera. It wasn’t accepted, so I’m really glad that Luca and team organized this. Really, really glad that we were able to be together here as representatives of all stakeholder groups. And I think that’s one of the things that has come out here: all stakeholders should be invited to speak. I hope that we can see that across all these discussions, that we can have consumer organizations and civil society organizations, alongside industry, academics and the technical community, being regularly and proactively invited to speak at these events in Brussels, in Brasilia, and elsewhere. I think that would be fantastic to see in the future. And then I think what we’ve also heard, from all the speakers, including ETNO, is that it’s quite clear that there are massive investments and contributions by content and application providers, including to network infrastructure. And so CAPs can contribute fairly. I think that’s quite a clear conclusion here. Yeah, going back to what Artur was rightly saying, this should be about evidence. I think there are just a lot of lobbying arguments flying around, quotes from this, from that. I think we should focus on expert analysis: BEREC, telecom regulators, the OECD, the German Monopolies Commission, and others who are studying this. And indeed, as Maarit was saying, it should take a broad, holistic perspective on the market and its evolution, absolutely. So we look forward to further analysis of what’s already been produced by these expert organizations and also what they’re already starting to work on next. And just to finish up, really important what we’ve heard already, starting with the Global South perspective: let’s remind ourselves what this is all about. This is about access to information at the end of the day. This is what the global and open internet brought us. 
We did not have such amazing access to the internet globally, through a simple connection to one network, just 30 years ago, or indeed 20 years ago, or indeed 10 years ago in many parts of the world. And it’s thanks to the work of many telecom operators, content providers, local communities that have developed this, et cetera, et cetera, and the technical community in this room and in this venue are to be thanked for all of that. And so let’s be really careful in these debates that we don’t tinker with the foundational open nature of the internet that means that we have access to all this information and this utility that is good for consumers, for us as users, for our everyday lives and our everyday economies, and that has and will continue to boost those economies and benefit our societies in the future. Let’s not tinker with that. Let’s not end up with information winners and losers, whether in the global south or the global north. This is about the global internet, whether north, south, west, or east, and we shouldn’t tinker with these basic open foundations.

KS Park:
Thank you. A correction. I think somebody said Korean telecoms are profiting a lot from the sender pays model so they could afford to increase the 5G coverage to what, 50%? Well, that is because the Korean telecoms, with their oligopoly hold on the market, sold the new phones only with the 5G features. So consumers are forced to buy 5G phones, but the 5G connectivity is so bad that there are 5G consumers filing class action lawsuits against telecoms right now, and so bad that a 5G bandwidth license was taken away by the government. So, you know, no rosy picture there. And I think this answers the last question that just came in: will telcos use the new revenues from the sender pays model for developing more network? I don’t think so. I mean, a monopoly, when it becomes profitable, becomes self-perpetuating. They want more profit. The Korea case shows that’s not the case. I mean, we are in Japan. In internet penetration rate, both Japan and Korea are at the top, but that’s just penetration, like where the internet is. In terms of connectivity, I’ve never used Wi-Fi this fast in Korea. The difference between Japan and Korea: the big telcos in Korea are not participating in internet exchanges; there is no internet exchange in Korea. The big telcos in Japan are all participating in big internet exchanges in Japan. In terms of connectivity across the country, Japan is much better. When I said cost is falling, I meant cost per megabyte delivered. That is definitely falling; somebody is lying if they say that it’s not. Despite the falling revenues, it is because of the falling cost that the profit is being maintained by European telcos. But the final question: it’s unfair, right? I mean, if it’s becoming so unprofitable that you cannot maintain profitability,

Luca Belli:
maybe we should turn telecoms into public utilities. A lot of interest in that one. Yes, go ahead. I see applause in the room. Yeah, I want to go back to the question again. So as KS Park has perfectly outlined, even if that money were to be invested in the network,

Thomas Lohninger:
the quality that we as users would experience would still be worse than what we have today. And there are also ample studies and evidence that money is actually not the bottleneck in network rollout. Very often, there are other factors at play. So, particularly in the context of Europe, it would not really help us solve the problems in rural areas that we still have. And I want to close on what I deem to be the most shameful thing as a European here, talking about this issue now that this ludicrous idea has been picked up by so many other world regions. And there’s only one reason for that: corporate capture of the European Commission. I mean, a former CEO of France Telecom has managed to make his way into the commission and warmed up a 10-year-old idea that on its face is just crazy for everyone. And now we have a public consultation with everybody’s voices proving that it is crazy, proving that it is refused by everyone except the telcos. And what is the response that we hear today from Telefonica? Oh, we don’t make enough money. Sorry, but if that is your sole argument, then yeah, maybe we should really rethink business models. And ultimately, with the Digital Networks Act or whatever it is called, there will be something; I mean, Breton has to deliver for his cronies at least something that will deregulate the market. And I fear that, again, this will go against the success recipe for telecoms, which is competition and cooperation. We don’t need, like the US, an oligopoly of a few very big mafia-like telecom companies. And yeah, I think I’ll leave it at that.

Luca Belli:
The final word to Konstantinos. Oh, great. Thank you so much.

Konstantinos Komaitis:
So very quickly, to the last question about USFs: we were having a conversation yesterday; we could discuss USFs, but I don’t think that this is what telcos want. Here I have a blog post from Telefonica on why Europe shouldn’t copy the USA’s Universal Service Fund, and why direct payments are actually a better option. And Europe really is setting a very bad example. And this is why, and this will be sort of a pitch, there is a global concern from civil society. There was a statement released yesterday. More than 20 civil society organizations from around the world co-signed it: in Brazil, India, Europe, the United States, they express the same concerns about the same issue. And I think that it is time that, if we want to have a conversation about infrastructure, let’s have it. But this is not the way to do it, because obviously no one really wants this conversation to happen apart from the telecom operators. All right, I think what we said today

Luca Belli:
that there are also other ways of having the conversation. And I’m very happy we had a lot of different views represented. I would like to thank everyone for their effort, not only to be here on-site, to fly here, but even to contribute remotely, as Gonzalo did from Europe. So a lot of very good ideas, a lot of food for thought. I think that everyone here now has sufficient elements to form their own opinion independently.

Artur Coimbra

Speech speed

154 words per minute

Speech length

1065 words

Speech time

415 secs

Audience

Speech speed

156 words per minute

Speech length

1304 words

Speech time

501 secs

Jean Jacques Sahel

Speech speed

194 words per minute

Speech length

2357 words

Speech time

729 secs

KS Park

Speech speed

139 words per minute

Speech length

2034 words

Speech time

880 secs

Kamila Kloc

Speech speed

169 words per minute

Speech length

1063 words

Speech time

378 secs

Konstantinos Komaitis

Speech speed

229 words per minute

Speech length

1094 words

Speech time

286 secs

Luca Belli

Speech speed

200 words per minute

Speech length

2101 words

Speech time

631 secs

Maarit Palovirta

Speech speed

173 words per minute

Speech length

2657 words

Speech time

921 secs

Thomas Lohninger

Speech speed

227 words per minute

Speech length

1475 words

Speech time

390 secs