International multistakeholder cooperation for AI standards | IGF 2023 WS #465


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Matilda Road

The AI Standards Hub is a collaboration between the Alan Turing Institute, the British Standards Institution, the National Physical Laboratory, and the UK government’s Department for Science, Innovation, and Technology. It aims to promote the responsible use of artificial intelligence (AI) and engage stakeholders in international AI standardization.

One of the key missions of the AI Standards Hub is to advance the use of responsible AI by encouraging the development and adoption of international standards. This ensures that AI systems are developed, deployed, and used in a responsible and ethical manner, fostering public trust and mitigating potential risks.

The involvement of stakeholders is crucial in the international AI standardization landscape. The AI Standards Hub empowers stakeholders and encourages their active participation in the standardization process. This ensures that the resulting standards are comprehensive, inclusive, and representative of diverse interests.

Standards are voluntary codes of best practice that companies adhere to. They assure quality, safety, environmental targets, ethical development, and promote interoperability between products. Adhering to standards helps build trust between organizations and consumers.

Standards also facilitate market access and link to other government mechanisms. Aligning with standards allows companies to enter new markets and enhance competitiveness. Interoperability ensures seamless collaboration between different systems, promoting knowledge sharing and technology transfer.

The adoption of standards provides benefits such as quality assurance, safety, and interoperability. Compliance ensures that products and services meet defined norms and requirements, instilling confidence in their reliability and performance. Interoperability allows for the exchange of information and collaboration, fostering innovation and advancements.

In conclusion, the AI Standards Hub promotes responsible AI use and engages stakeholders in international AI standardization. It fosters the development and adoption of international standards to ensure ethical AI use. Standards offer benefits like quality assurance, safety, and interoperability, building trust between organizations and consumers, enhancing market access, and linking to government mechanisms. The adoption of standards is crucial for responsible consumption, sustainable production, and industry innovation.

Ashley Casovan

Standards play a crucial role in the field of artificial intelligence (AI), ensuring consistency, reliability, and safety. However, the lack of standardisation in this area can lead to confusion and hinder the advancement of AI technologies. The complexity of the topic itself adds to the challenge of developing universally accepted standards.

To address this issue, the Canadian government has taken proactive steps by establishing the Data and AI Standards Collaborative. Led by Ashley, representing civil society, this initiative aims to comprehensively understand the implications of AI systems. One of the primary goals of the collaborative is to identify specific use cases and develop context-specific standards throughout the entire value chain of AI systems. This proactive approach not only helps in ensuring the effectiveness and ethical use of AI but also supports SDG 9: Industry, Innovation, and Infrastructure.

Within the AI ecosystem, various types of standards are required at different levels. These include certifications and standards both for evaluating quality management systems and for ensuring product-level quality. Furthermore, there is a growing interest in understanding the individual training requirements for AI. This multifaceted approach to standards highlights the complexity and diversity within the field.

The establishment of multi-stakeholder forums is recognised as a positive step towards developing AI standards. These forums play a vital role in establishing common definitions and understanding of AI system life cycles. North American markets have embraced such initiatives, including the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF), demonstrating their effectiveness in shaping AI standards. This collaborative effort aligns with SDG 17: Partnerships for the Goals.

Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspectives is paramount for ensuring that the standards address the needs and challenges of different communities. Effective data analysis and processing within the context of AI standards necessitate inclusivity. This aligns with SDG 10: Reduced Inequalities as it promotes fairness and equal representation in the development of AI standards.

Engaging Indigenous groups and considering their perspectives is critical in developing AI system standards. Efforts are being made in Canada to include the voices of the most impacted populations. By understanding the potential harms of AI systems to these groups, measures can be taken to mitigate them. This highlights the significance of reducing inequalities (SDG 10) and fostering inclusivity.

Given the global nature of AI, collaboration on an international scale is essential. An international exercise through organisations such as the Organisation for Economic Co-operation and Development (OECD) or the Internet Governance Forum (IGF) is proposed for mapping AI standards. Collaboration between countries and regions will help avoid duplication of efforts, foster harmonisation, and promote the implementation of effective AI standards globally.

It is important to recognise that AI is not a monolithic entity but rather varies in its types of uses and associated harms. Different AI systems have different applications and potential risks. Therefore, it is crucial to engage the right stakeholders to discuss and address these specific uses and potential harms. This aligns with the importance of SDG 3: Good Health and Well-being and SDG 16: Peace, Justice, and Strong Institutions.

In conclusion, the development of AI standards is a complex and vital undertaking. The Canadian government’s Data and AI Standards Collaborative, the involvement of multi-stakeholder forums, the importance of inclusivity and engagement with Indigenous groups, and the need for international collaboration are all prominent factors in shaping effective AI standards. Recognising the diversity and potential impact of AI systems, it is essential to have comprehensive discussions and involve all relevant stakeholders to ensure the development and implementation of robust and ethical AI standards.

Audience

The analysis reveals that the creation of AI standards involves various bodies, but their acceptance by governments is not consistent. In particular, standards institutions accepted by the government are more recognized than technical community-led standards, such as those from the IETF or IEEE, which are often excluded from government policies. This highlights a discrepancy between the standards created by technical communities and those embraced by governments.

Nevertheless, the analysis suggests reaching out to the technical community for AI standards. The technical community is seen as a valuable resource for developing and refining AI standards. Furthermore, the analysis encourages the creation of a declaration or main message from the AI track at the IGF (Internet Governance Forum). This indicates the importance of consolidating the efforts of the AI track at IGF to provide a unified message and promote collaboration in the field of AI standards.

Consumer organizations are recognized as playing a critical role in the design of ethical and responsible AI standards. They represent actual user interests and can provide valuable insights and data for evidence-based standards. Additionally, consumer organizations can drive the adoption of standards by advocating for consumer-friendly solutions. The analysis also identifies the AI Standards Hub as a valuable initiative from a consumer organization’s perspective. The Hub acknowledges and welcomes consumer organizations, breaking the norm of industry dominance in standardization spaces. It also helps bridge the capacity gap by enabling consumer organizations to understand and contribute effectively to complex AI discussions.

The analysis suggests that AI standardization processes should be made accessible to consumers. Traditionally, standardization spaces have been dominated by industry experts, but involving consumers early in the process can help ensure that standards are compliant and sustainable from the start. User-friendly tools and resources can aid consumers in understanding AI and AI standards, empowering them to participate effectively in the standardization process.

Furthermore, the involvement of consumer organizations can diversify the AI standardization process. They represent a diverse range of views and interests, bringing significant diversity into the standardization process. Consumers International, as a global organization, is specifically mentioned as having the potential to facilitate this diversity in the standardization process.

In conclusion, the analysis highlights the importance of collaboration and inclusivity in the development of AI standards. It underscores the need to bridge the gap between technical community-led standards and government policies. The involvement of consumer organizations is crucial in ensuring the ethical and responsible development of AI standards. Making AI standardization processes accessible to consumers and diversifying the standardization process are essential steps towards creating inclusive and effective AI standards.

Wansi Lee

International cooperation is crucial for the standardization of AI regulation, and Singapore actively participates in this process. The country closely collaborates with other nations and engages in multilateral processes to align its AI practices and contribute to global standards. Singapore has initiated a mapping project with the National Institute of Standards and Technology (NIST) to ensure the alignment of its AI practices.

In addition, multi-stakeholder engagement is considered essential for the technical development and sharing of AI knowledge. Singapore leads in this area by creating the AI Verify Testing Framework and Toolkit, which provides comprehensive tests for fairness, explainability, and robustness of AI systems. This initiative is open-source, allowing global community contribution and engagement. The AI Verify Toolkit supports responsible AI implementation.
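
To illustrate the kind of technical test such a toolkit runs, here is a minimal sketch of a demographic-parity fairness check. This is not the actual AI Verify API; the function name, group labels, and data are hypothetical, and a real framework would include many more metrics and reporting machinery.

```python
# Illustrative sketch only -- not the AI Verify API. A fairness test of
# this kind compares positive-outcome rates across demographic groups.

def demographic_parity_gap(outcomes, groups):
    """Return the largest difference in positive-outcome rates
    between any two groups.

    outcomes: list of 0/1 model decisions
    groups:   list of group labels, aligned with outcomes
    """
    rates = {}
    for outcome, group in zip(outcomes, groups):
        n_pos, n_total = rates.get(group, (0, 0))
        rates[group] = (n_pos + outcome, n_total + 1)
    positive_rates = [pos / total for pos, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical example: group "a" receives positive outcomes at a rate
# of 0.6, group "b" at 0.4, so the parity gap is 0.2.
outcomes = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
groups = ["a"] * 5 + ["b"] * 5
gap = demographic_parity_gap(outcomes, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A toolkit would typically compare such a gap against a tolerance chosen for the deployment context and flag the model if it exceeds that threshold.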

Adherence to AI guidelines is important, and the Singapore government plays an active role in setting guidelines for organizations. Implementing these guidelines ensures responsible AI implementation. The government also utilizes the AI Verify Testing Framework and Toolkit to validate the implementation of responsible AI requirements.

Given Singapore’s limited resources, the country strategically focuses its efforts on specific areas where it can contribute to global AI conversations. Singapore adopts existing international efforts where possible and fills gaps to make a valuable contribution. Despite being a small country, Singapore recognizes the significance of its role in standard setting and strives to make a meaningful impact.

The Singapore government actively engages with industry members to incorporate a broad perspective in AI development. Input from these companies is valued to create a comprehensive and inclusive framework for responsible AI implementation.

The establishment of the AI Verify Foundation provides a platform for all interested organizations to contribute to AI standards. The open-source platform is not limited by organization size or location, welcoming diverse perspectives. Work done on the AI Verify Foundation platform is rationalized at the national level in Singapore and supported globally through various platforms, such as the OECD, GPAI, or ISO.

In conclusion, Singapore recognizes the importance of international cooperation, multi-stakeholder engagement, adherence to guidelines, strategic resource management, and industry partnerships in standardizing AI regulation. The country’s active involvement in initiatives such as the AI Verify Testing Framework and Toolkit and the AI Verify Foundation demonstrates its commitment to responsible AI development and global AI conversations. The emphasis on harmonized or aligned standards by Wansi Lee further highlights the need for a unified approach to AI regulation.

Florian Ostmann

During the session, the role of AI standards in the responsible use and development of AI was thoroughly explored. The focus was placed on the importance of multi-stakeholder participation and international cooperation in developing these standards. It was recognized that standards provide a specific governance tool for ensuring the responsible adoption and implementation of AI technology.

In line with this, the UK launched the AI Standards Hub, a collaborative initiative involving the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory. The aim of this initiative is to increase awareness and participation in AI standardization efforts. The partnership is working closely with the UK government to ensure a coordinated approach and effective implementation of AI standards.

Florian Ostmann, the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, stressed the significance of international cooperation and multi-stakeholder participation in making AI standards a success. He emphasized the need for a collective effort involving various stakeholders to establish effective frameworks and guidelines for AI development and use. The discussion highlighted the recognition of AI standards as a key factor in ensuring responsible AI practices.

The UK government’s commitment to AI standards was reiterated as the National AI Strategy published in September 2021 highlighted the AI Standards Hub as a key deliverable. Additionally, the AI Regulation White Paper emphasized the role of standards in implementing a context-specific, risk-based, and decentralized regulatory approach. This further demonstrates the UK government’s understanding of the importance of AI standards in governing AI technology.

The AI Standards Hub actively contributes to the field of AI standardization. It undertakes research to provide strategic direction and analysis, offers e-learning materials and in-person training events to engage stakeholders, and organizes events to gather input on AI standards. By conducting these activities, the AI Standards Hub aims to ensure a comprehensive approach to addressing the needs and requirements of AI standardization.

The discussion also highlighted the significance of considering a wider landscape of AI standards. While the AI Standards Hub focuses on developed standards, it was acknowledged that other organizations, like the IETF, also contribute to the development of AI standards. This wider perspective helps in gaining a holistic understanding of AI standards and their implications in various contexts.

Florian Ostmann expressed a desire to continue the discussion on standards and AI, indicating that the session had only scratched the surface of this vast topic. He welcomed ideas for collaboration from around the world, underscoring the importance of international cooperation in shaping AI standards and governance.

In conclusion, the session emphasized the role of AI standards in the responsible use and development of AI technology. It highlighted the significance of multi-stakeholder participation, international cooperation, and the need to consider a wider landscape of AI standards. The UK’s AI Standards Hub, in collaboration with the government, is actively working towards increasing awareness and participation in AI standardization. Florian Ostmann’s insights further emphasized the importance of international collaboration and the need for ongoing discussions on AI standards and governance.

Aurelie Jacquet

The analysis examines multiple viewpoints on the significance of AI standardisation in the context of international governance. Aurelie Jacquet asserts that AI standardisation can serve as an agile tool for effective international governance, highlighting its potential benefits. On the other hand, another viewpoint stresses the indispensability of standards in regulating and ensuring the reliability of AI systems for industry purposes. Australia is cited as an active participant in shaping international AI standards since 2018, with a roadmap in 2020 focusing on 42001. The adoption of AI standards by the government aligns with the NSW AI Assurance Framework, strengthening the use of standards in AI systems.

Education and awareness regarding standards emerge as important factors in promoting the understanding and implementation of AI standards. Australia has taken steps to develop education programs on standards and build tools in collaboration with CSIRO’s Data61, leveraging their expertise in the field. These initiatives aim to enhance knowledge and facilitate the adoption of standards across various sectors.

Despite having a small delegation, Australia has made significant contributions to standards development and has played an influential role in shaping international mechanisms. Through collaboration with other countries, Australia strives to tailor mechanisms to accommodate delegations of different sizes. However, it is noted that limited resources and time pose challenges to participation in standards development. In this regard, Australia has received support from nonprofit organisations and their own government, which enables experts to voluntarily participate and contribute to the development of standards.

Context is highlighted as a crucial element for effective engagement in standards development. Australia’s experts have been actively involved in developing white papers that provide the necessary background and context for standards documents. This ensures that stakeholders have a comprehensive understanding of the issues at hand, fostering informed discussions and decision-making processes.

The analysis also highlights the challenges faced by SMEs in the uptake of standards. Larger organisations tend to adopt standards more readily, leaving SMEs at a disadvantage. Efforts are underway to address these challenges and make standards more accessible and fit for purpose for SMEs. This ongoing discussion aims to create a more inclusive environment for all stakeholders, regardless of their size or resources.

The significance of stakeholder inclusion is emphasised throughout the analysis. Regardless of delegation size, stakeholder engagement is seen as critical in effective standards development. Australia has actively collaborated with other countries to ensure that mechanisms and processes are tailored to their respective sizes, highlighting the importance of inclusiveness in shaping international standards.

Standards are seen as enablers of interoperability, promoting harmonisation of varied perspectives in AI regulations. Different regulatory initiatives and practices in AI are deemed beneficial, and standards play a key role in facilitating interoperability and bridging gaps between different approaches.

Moreover, the adoption of AI standards is advocated as a means to learn from international best practices. Experts from diverse backgrounds can engage in discussions, enabling nations to develop policies and grow in a responsible manner. The focus lies on using AI responsibly and scaling its application through the use of interoperability standards.

In conclusion, the analysis underscores the importance of AI standardisation in international governance. It highlights various viewpoints on the subject, including the agile nature of AI standardisation, the need for industry-informed regulation, the significance of education and awareness, the role of context, the challenges faced by SMEs, the importance of stakeholder inclusion, and the benefits of interoperability and learning from international best practices. The analysis provides valuable insights for policymakers, industry professionals, and stakeholders involved in AI standardisation and governance.

Nikita Bhangu

The UK government recognizes the importance of AI standards in the regulatory framework for AI, as highlighted in the recent AI White Paper. They emphasize the significance of standards and other tools in AI governance. Digital standards are crucial for effectively implementing the government’s AI policy.

To ensure effective standardization, the UK government has consulted stakeholders to identify challenges in the UK. This aims to provide practical tools for stakeholders to engage in the standardization ecosystem, promoting participation, collaboration, and innovation in AI standards.

The establishment of the AI Standards Hub demonstrates the UK government’s commitment to reducing barriers to AI standards. The hub, established a year ago, has made significant contributions to the understanding of AI standards in the UK. Independent evaluation acknowledges the positive impact of the hub in overcoming obstacles and promoting AI standards adoption.

The UK government plans to expand the AI Standards Hub and foster international collaborations. This growth and increased collaboration will enhance global efforts towards achieving AI standards, benefiting industries and infrastructure. Collaboration with international partners aims to create synergies between AI governance and standards.

Representation of all stakeholder groups, including small to medium businesses and civil society, is crucial in standard development organizations. However, small to medium digital technology companies and civil society face challenges in participating effectively due to resource and expertise limitations. Even the government, as a key stakeholder, lacks technical expertise and resources.

The UK government is actively working to improve representation and diversity in standard development organizations. Initiatives include developing a talent pipeline to increase diversity and collaborating with international partners and organizations such as the Internet Governance Forum’s Multistakeholder Advisory Group (MAG). Existing organizations like BSI and the IEC contribute to efforts for diverse and inclusive standards development organizations.

In conclusion, the UK government recognizes the importance of AI standards in the regulatory framework for AI and actively works towards their implementation. Consultation with stakeholders, establishment of the AI Standards Hub, and efforts to increase international collaborations reduce barriers and promote a thriving standardization ecosystem. Initiatives aim to ensure representation of all stakeholder groups, fostering diversity and inclusion. These actions contribute to advancements in the field of AI and promote sustainable development across sectors.

Sonny

The AI Act introduced by the European Union aims to govern and assess AI systems, particularly high-risk ones. It sets out five principles and establishes seven essential requirements for these systems. The act underscores the need for collaboration and global standards to ensure fair and consistent AI governance. By adhering to shared standards, stakeholders can operate on a level playing field.

The AI Standards Hub is a valuable resource that promotes global cooperation. It offers a comprehensive database of AI standards and policies, accessible worldwide. The hub facilitates collaboration among stakeholders, enabling them to align efforts and work towards common goals. Additionally, it provides e-learning materials to enhance understanding of AI standards.

Moreover, the AI Standards Hub strives to promote inclusive access to AI standards and policies. It encourages stakeholders from diverse backgrounds and industries to contribute and participate in standard development and implementation. This inclusive approach ensures comprehensive and effective AI governance.

The partnership between the AI Standards Hub and international organizations, such as the OECD, further demonstrates the significance of global cooperation in this field. By leveraging expertise and resources from like-minded institutions, the hub fosters a collective effort to tackle AI-related challenges and opportunities.

In summary, the EU AI Act and the AI Standards Hub emphasize the importance of collaboration, global standards, and inclusive access to AI standards and policies. By working together, stakeholders can establish a harmonized approach to AI governance, promoting ethical and responsible use of AI technologies across industries and regions.

Session transcript

Florian Ostmann:
Good morning, everyone. I think we’re going to start. I realize it’s an early start. And thank you very much to those of you who are in the room for making it so early to this session to start today with us. My name is Florian Ostmann. I’m the head of AI Governance and Regulatory Innovation at the Alan Turing Institute, which is the UK’s National Institute for Data Science and AI. And it’s a real pleasure to welcome you to this session today, which will be dedicated to thinking about AI standardization and the role that multi-stakeholder participation and international cooperation have to play to make AI standards a success. There’s been quite a lot of discussion, of course, across many different sessions around AI over the last few days, including on AI governance in many different ways. And standards has come up in quite a few different contexts. But I don’t believe there has been a full session dedicated to standards in the sense that we will be looking at today, which is standards developed by formally recognized standards development organizations. We’ll tell you a bit more about what we mean by that in a moment. And so we’re really excited about the opportunity to dive deeper into this particular topic, into the role that standards as a specific governance tool can play to ensure the responsible use and development of AI. And to do so in particular in relation to the principles that are at the core of IGF in terms of multi-stakeholder participation and international cooperation. I’ll say a few words about the structure of the session. We will begin with a presentation about an initiative that we launched in the UK just about a year ago. That initiative is called the AI Standards Hub. Some of you may have heard about it before. It’s a partnership between the Alan Turing Institute, the British Standards Institution, and the National Physical Laboratory in the UK, working very closely with the UK government.
And it’s an initiative dedicated to awareness raising, increasing participation, and capacity building around AI standardization. So we’ll tell you a bit about how we set up the initiative, what the mission is, and also our plans and sort of interest to collaborate internationally with like-minded partners around these topics. And we’ll then move on to a panel discussion. We’ve got four terrific speakers with us today from different regions of the world to join us to reflect on these themes of multi-stakeholder participation and international cooperation in AI standards. And then we’ll make sure to reserve some time at the end for sort of your participation, your thoughts, and questions that you may have. We will be using Mentimeter later on as an interactive exercise, but we will get to that later on. We’ll share the link for that when we get to it. And please do feel free, you know, throughout the session to use the chat function or the Q&A function to share any questions. We will monitor the chat and we will try our best to work any questions into the session as we move along. So with all of that said, we will start with the presentation, and for that I’m joined by two colleagues: Matilda Road, who is the AI and cyber security sector lead at the British Standards Institution, which is the UK’s national standards body, and Sandeep Bhandari, head of digital innovation at the National Physical Laboratory, which is the UK’s National Measurement Institute or Metrology Institute. So I’ll pass over to them and then I’ll come back later.

Matilda Road:
Matilda, over to you. Thank you, Florian. Good morning everyone. It’s great to see so many of you here, and thank you to those of you who are joining us online as well. So the AI Standards Hub, as Florian has already introduced, has got two key missions. The first is advancing the use of responsible AI, and that’s by unlocking some of the particular benefits of standards as a governance mechanism. As Florian mentioned, this week we’ve heard a lot about regulation for AI, guidelines and frameworks, but in this session we’re specifically focusing on standards, which are distinct from these other regulatory mechanisms in the sense that standards are voluntary codes of conduct representing best practice. And the second mission of the Standards Hub is to empower stakeholders to become actively involved in the international AI standardization landscape, including participation in the development of standards and the informed use of published standards. If you’ve attended any other sessions this week on how we can look at responsible AI practices, you might find the landscape slightly overwhelming, and the AI Standards Hub can hopefully be a tool to help navigate that space. Is anyone in the room involved in the development of standards in any way? Just out of interest? No, okay, great. So there are several organizations behind the AI Standards Hub. We’ve heard again this week, and I’m sure you’ve been to other sessions, that the use of responsible AI calls for tracing the data that’s used in models, finding weaknesses, and making sure that they’re reliable and not giving us untrustworthy results. Many of these questions are actually still open research problems, and that’s one of the reasons that the Hub brings together several organizations with different strengths. So the three that we’re here representing today that make up the Hub are the Alan Turing Institute, which is the national institute for Data Science and AI, so it’s an academic research organization.
BSI, the British Standards Institution, which is the national standards body that represents the UK at ISO, the International Organization for Standardization. And the National Physical Laboratory, NPL, which is the National Metrology or Measurement, not Weather, Institute, which produces technical measurement standards. And these feed into the overall standards themselves. And this initiative has been supported by the UK government’s Department for Science, Innovation, and Technology. So international standards are governance tools which are developed by various standards organizations, some of which we’ve listed on the slide here. And if you aren’t aware of the standards development landscape in AI, which hopefully by the end of this session you will be more informed on, you might have come across some of the most famous ISO standards, such as the 27001 series on cybersecurity and 9001 on quality management. And there is now a rapidly growing landscape of standards for AI. So we’re anticipating the first ISO standard on AI to be published at the end of this year or perhaps the beginning of next year. And there are many others in development, including on the use of sustainable AI, mitigation of bias in AI, and a very interesting standard to be published, hopefully next spring or summer, 42006 on audit practices for AI, which will also be very interesting for compliance with the EU AI Act. So why standardization for AI versus, for example, regulation or a framework? Regulation is obviously something that’s supported by a legal framework, and organizations are required to comply, whereas standards are voluntary codes of best practice. But why would companies bother to adhere to these voluntary codes? Well, as you might have heard from some of the large organizations developing AI models this week, they’ve been developing their own internal codes of best practice, but each one of these is slightly different.
If we can develop a standardized way of doing this, we can provide quality and safety assurance, and build in other goals like environmental targets or the UN Sustainable Development Goals. Standards can be used for ethical development, knowledge and technology transfer, and to provide interoperability between products. Ultimately, standards can help build trust between organizations and their consumers, and also along the supply chain, both with the supply chain that an organization is feeding into and with the organizations that are feeding into its own supply chain. They can also provide market access by helping companies comply with certain trade requirements. They link into other government mechanisms, and can also be used as a kind of pathway towards regulation, as they are indeed in certain sectors, particularly for things like medical devices, for example. So, just because we had a response in the room to standards development, I hope that this is relevant information: standards are voluntary for organizations to comply with. They're developed by committees rather than by the standards bodies themselves, and unlike regulation, which is written by regulators, they're drafted by experts in the area who volunteer on a standards committee, and they're adopted by consensus, two-thirds consensus, in case you're interested. There are also quite a lot of standards. Roughly 3,000 standards are produced every year by BSI, the British Standards Institute, alone, and again, I hope that the AI Standards Hub will be a useful tool for those of you who are looking to navigate this space with regards to AI. So not just the horizontal AI standards, which are general standards related to AI, but also the ones that are sector-specific, because we have specific requirements in certain sectors. Because it's early in the morning, I thought it would be fun to do a quiz. And I wondered if these things on the board mean anything to anyone in the room. Don't be shy. Okay, good.
Yeah. This, again, is an indicator of the fact that there can be quite an overwhelming number of acronyms and numbers in the standards landscape, which, once you become familiar with them, you find yourself using all the time, but which can make the space quite impenetrable at first. So 42001 is the standard that we're expecting to be published at the end of the year. It's currently at FDIS stage, which is final draft international standard. That means it's only out for editorial comments, and as long as there aren't too many of them, we're expecting it to be published in December or January. So this will be the first international standard published on AI, and it's an AI management system standard. I already mentioned 42006 on audit, which we expect next spring. JTC1 is Joint Technical Committee 1, the parent committee of Subcommittee 42, which actually developed 42001. I could keep going on with these numbers. And then, just to show how this maps down to the national level, ART/1 is the relevant AI standards development committee within BSI. In case you're interested, on the ISO website there's a lot of information about how many and which countries are involved in each committee in the development of the standards, so you can dig into that data. And with that, I'm going to hand back to Florian to tell you more about the hub.

Florian Ostmann:
Thank you, Matilda. So with that set out in terms of what kinds of standards we are focused on and why we think those standards are important, let me tell you a bit more about the relationship between the Standards Hub and the UK's policy thinking on AI, and then go into more detail on how we developed our strategy and the kinds of challenges that we're trying to address with the hub. In terms of the policy context, Nikita, who's joining us on the panel discussion, will go into more detail later on. The main thing to mention is that the UK government has, over the last few years, gone through a process of thinking about the regulation of AI, and also, more broadly, the regulation of digital technologies in general, and throughout different pieces of policy work, policy papers, and policy statements, there's been a recognition of the role of standards as a governance tool, for the reasons that Matilda mentioned: the way in which standards are developed, the fact that they are open to input from all relevant stakeholder groups, and the fact that they can be useful to support regulation in various ways, or to fill regulatory gaps where regulation doesn't exist. So Nikita will say more about this, but essentially the hub is a deliverable that was highlighted in the National AI Strategy that the UK government published in September 2021, and it also plays an important role in the context of the recently published AI Regulation White Paper, which came out about half a year ago, in March this year. Now, just a few words about the White Paper at a very high level: it sets out a context-specific, risk-based, and decentralized regulatory approach. What that means in practice is that it's based on the view that existing regulatory bodies are best placed to think about the implications of AI in the relevant regulatory remits.
And in order to encourage and enable regulators to think about the implications of AI, the White Paper sets out five principles. The principles will be fairly familiar to anyone familiar with AI governance; they resonate very closely with the OECD AI principles, for example. So the White Paper sets out these five principles and then essentially puts the task to regulators to think about the implementation of these principles in their remits. In that sense, there's an important link between the objectives of the regulatory approach and the role of standards, in that standards are seen as facilitating the implementation of the principles, providing the detail that is needed to make those principles meaningful in a given context, in a given regulatory remit. Let me now turn to the stakeholders that we're trying to address with the hub. As Matilda mentioned, standards in the organizations that we are focused on are developed through a process that is open to all stakeholder groups, and we know that in the AI space there are lots of different stakeholder groups whose interests are affected or whose views are relevant to the development of standards. That includes, of course, different actors in industry, but also participants outside of industry, and includes, importantly, civil society and consumer perspectives, and it also includes regulators and academic researchers. And while the standards development process is open to all of those groups, we know historically that not all of these groups are equally strongly represented in those processes.
And so, to give some examples, civil society voices, we know, are less strongly represented compared to other voices, compared to industry, for example. And within industry, SMEs and startups are less strongly represented compared to larger companies. So at the core of the mission of the AI Standards Hub, and the reason for setting it up, isn't just the recognition of the importance of AI standards; it's also the recognition that, in order for AI standards to be effective and fit for purpose, it's really important that all of those stakeholder perspectives are included in the development of standards. And what we're trying to do with the Hub is to help all of those stakeholder groups, and especially those who have less experience in the space, to develop the knowledge, the skills, and the understanding, and also perhaps the coordination, that's needed to achieve that involvement. In terms of what the key groups are, I think I've mentioned them already. In the private sector, that includes larger and smaller companies; it includes civil society, consumer groups, regulatory bodies, and academia; and then, of course, there are people who are already actively involved in standards committees. Those are also key, because they can play an important role in guiding others and sharing information about that work. We did a fair amount of stakeholder engagement leading up to the launch of the Hub. We were very mindful of making sure that we develop an initiative, and a shape for the initiative, that meets actual needs, rather than just developing something in the abstract for which there isn't a need. And so we did several engagement roundtables and surveys.
with all the relevant stakeholder groups. One of the things we tested at the outset, of course, was whether there is a recognition of the importance of standards, and what the current level of awareness and engagement in the space is. As this slide shows, and there's more detailed data behind it, at a high level the finding across all groups was a strong recognition that standards are going to be key for AI governance, and there is significant thinking in each group about AI standardization. But there is a clear gap, as you can see, between the perceived importance of the topic and the extent of current thinking, and that awareness gap, and to some extent also capability gap, in thinking about standards is what we're trying to address. We then tried to dig a bit deeper and explored with stakeholder groups what the challenges are: what's holding you back, what explains that gap, what's holding you back in engaging with AI standardization? At a high level, four key areas came out of that part of the engagement. The first one is a perceived lack of easily accessible information around AI standards. That includes keeping track of which standards are being developed and published, but also identifying those standards that are most relevant to a given user or stakeholder. Secondly, the skills needed to contribute to standards development, or to engage with standards once they are published. There's a strong sense that the process of developing standards can be quite complex, and you need knowledge and skills to navigate the process, but then of course also knowledge about what best practice for AI looks like: what does a good standard look like, what should I be contributing if I am on a committee and contributing to drafting a standard? So skills are the second area.
Thirdly, securing organizational buy-in for engagement. We know that engagement with standards development can be time-consuming. How do I convince my organization that that's a worthwhile thing to do, given that there are competing resource priorities? That's relevant, of course, especially for those types of organizations who are historically less involved in this space. And then fourthly, a need for analysis and strategic direction. Given the fact, and I'll say more about this in a moment, that there is such a vast number of AI-related standards already being developed, which are the areas that are most important? Are there gaps that need to be addressed, standards that are missing? So there's a need for strategic direction in shaping AI standardization. Those were the challenges, and in shaping the strategy, we essentially translated those challenges into four different pillars of activity that the hub is pursuing. The first pillar is what we're calling the observatory. That can be found on our website, and it consists of two databases: one is a database on AI standards, and the other is a database on AI-related policies from around the world. Community collaboration is around organizing events, virtual and in-person, to engage the community and bring stakeholders together, to gather input into standards that are under development, to identify priorities and needs, and so on. Knowledge and training is where we've developed a suite of e-learning materials that can be found on our website, but we're also offering training events, both virtual and in-person. And then, fourthly, research and analysis. That's a more traditional research function, where the hub pursues research to develop insights to address these needs for strategic direction and analysis.
I would like to say just a bit more about each pillar, and in particular the observatory, and within the observatory the AI standards database, because that's in a way the resource that took the most thinking in terms of how we develop it and how it should be designed. So the observatory for AI standards is a database on our website that tracks both standards under development and standards that have already been published for AI. You can see a breakdown on the slide of how these standards are distributed across different categories. The key thing here is that it's really a large number already: over 300 relevant standards are captured in the database, including a large number of standards that are already published. What was key in designing the database was to make it easier to navigate that vast number of standards. So we've developed a range of different filter categories, a search function, and so on. We also have interactive features: it's possible to follow a standard, in which case you get updates when the standard moves from one development phase to the next, for example. You can let other community members know if you have been involved in the development of standards, so they can reach out to you and find out more. And then there is a discussion thread, and also an opportunity to leave reviews for a standard that you may have used or that you may have been involved with. In terms of the other pillars, I'll keep this very brief, but you will find more information on all of this on our website. On the community collaboration pillar, over the last 12 months we had a series of events. Those are to a large extent recorded, and you will be able to find the recordings on our website if you're interested. Some of that was focused on transparent and explainable AI as a specific topic; other events were more generally focused on trustworthy AI.
There was targeted engagement with certification bodies, and then we also have a standing forum for UK regulators, where regulators have a space to come together among themselves, as a single stakeholder group, to exchange knowledge around the role that standards can play in AI regulation. For knowledge and training, as I mentioned, that includes various e-learning materials; there's a snapshot of some examples on this slide. If you're interested, we'd like to invite you to take a look at that on the website, and the same is the case for research and analysis. This is just a snapshot of some of the most recent pieces, but more of that, and more details, you'll be able to find on the website. That concludes the summary of what we have been up to so far and why we set up the AI Standards Hub, and I'll now pass on to Sonny to tell us more about our objectives and our interest in collaborating internationally.

Sonny:
Brilliant, thank you, Florian, and good morning to everybody in the room, and good afternoon and good evening to those online as well. My name is Sonny, and I'm from the National Physical Laboratory and part of this amazing collaboration that we've set up. I'll talk a bit more about what the collaboration is, what our aspirations are, and why we have those objectives and aspirations. So we've heard a lot over the last few days about the growing need for standards to help with governance and with assessment, and heard about many different challenges. On the screen you can see several different initiatives and developments of policies and strategies, but actually, just yesterday, I heard about some work going on in Africa where, across the continent, there are at least 25 initiatives and around 466 different policies in development. So we've got all of this environment out there in the world, where lots of countries and lots of regional organizations are working to do all of this work. And we've really seen that the world recognizes the importance of standards. If we draw out just one of those examples, the recently published EU AI Act, we can see the role it gives to standards and to conformity with standards. To support that, most nations have something called a quality infrastructure, which tends to be built up of a few different organizations. These include organizations such as my own, which produces technical measurement standards; the national standards body, such as the British Standards Institute; and then other organizations that actually check conformity and compliance, as well as accredit organizations. So our hub is an example of how bringing these together can be a valuable exercise in itself, because it helps with a diverse set of skills and capacity building, as well as looking at the entire ecosystem, the whole value chain, together at the same time.
But how do we lift that from a national paradigm to an international paradigm? These standards have to be worked on by consensus. And we all have shared challenges, and globally we're all at various stages of our domestic journeys. So how do we bring everyone around the world to the same level and work on our shared global challenges, to truly realize the benefits of AI, as well as to give us, as members of the public, confidence in this technology so we can really benefit from it? Here on the screen you see some of the role of standards within the EU AI Act, where they've taken five principles and then set out seven essential requirements for high-risk systems. And so the European Commission has requested CEN-CENELEC to now develop standards around 10 issues, to really try and harmonize these standards, which then provide a presumption of legal conformity. Now, as I said, these standards are generally voluntary, so how can we work on these together such that everyone is on that same level platform? We really are trying to do this, and on the screen you see just three small examples of some of the things that we have in train. In addition, we have had much international interest, and the reach-out we've had from north, south, east, and west has been really pleasing. We're partners with the OECD, and we cross-reference with their tools and metrics for trustworthy AI, and they also cross-reference with the hub. We're also doing a lot of work with NIST and other like-minded organizations. To put that into a bit of context, NIST and NPL are the American and British equivalents of each other, and there are around 100 such organizations around the world, which are signatories to the 1875 Metre Convention. So there are already certain platforms for doing this work. Now assessment, for example, is expressly a measurement activity. How can we all understand and make those measurements?
How would you actually measure the trustworthiness of something quantitatively, while also appreciating that AI is very context-specific? So there is now a new paradigm where we also have to think about qualitative assessment and measurement. And then another example we have here is some of the work we're doing with other national standards bodies around the world; in this case we've pulled out the bilateral work going on with Canada at the moment, and again, it's not limited to just Canada, we're working with many other countries. Next slide, please, Florian. And so, broadly, these are the kinds of things that people are asking us to think about and do. How do we build these international stakeholder networks? There is a big challenge out there in the world that every region is lacking the skills, the resources, the people, and the knowledge in these things, so how do we bring the right people together to share and to address these shared challenges? And so, as talked about several times already, it's about bringing the national standards bodies together with the national measurement institutes, bringing the right academic prowess into the room, and, most importantly, being clear about why we are doing this and who we are doing it for. We've been asked to help and work with others on collaborative research, and then also on developing these shared resources, lifting up from the national paradigm to the international paradigm. Florian has already shown some screenshots of the platform, and what I'd really like to finish this part with is this: this is not just a UK resource. Anybody can access it, so please come have a look, and if there's anything there of interest and you would really like to work on shared challenges, then please get in touch. Thank you.

Florian Ostmann:
Great. Thank you, Sonny. So that brings us to the end of the presentation part of our session. As I mentioned earlier, we do have an interactive exercise that we've prepared and that we'd like to come back to towards the end. We'll do that using Mentimeter, and so, before we move on to the panel discussion, I'd like to invite you all, both those of you in the room and those of you joining online, to take a moment to go to Mentimeter and then, in your own time, complete the questions that come up on your screen. It's not a big exercise, so don't worry. It's also worth mentioning that it's completely anonymous, but I think there will be some interesting results that we can look at when we get to the discussion later on. To get to Mentimeter, you can either go to menti.com and enter this code, or, as my colleague Anna will put the link for Mentimeter into the chat, you can just click there, or you can try to scan the QR code if that works for you. So we'll just take a moment, I'll leave the slide on for a short moment, and the link is now in the chat as well, before we move on. Great, I think I'll stop sharing the slide, but the link for the Mentimeter is in the chat, so I hope everyone will be able to access it there. Let me move on to introducing our panel. As I mentioned, we're very excited to be joined by a great panel of experts today, with a vast amount of experience in the AI standards space from across different regions of the world, and also Nikita Bhangu, our colleague from the UK government, who will tell us a bit more about the context in the UK policy field. So I will stop sharing the slide, and I'd like to invite our panellists to turn on their cameras. Fantastic. Nikita is joining us here on the stage, so it'd be great if you could move the camera such that we are both visible. Nikita is sitting to the right of me. And I'll just briefly introduce our panellists, starting with Nikita.
Nikita Bhangu is the Head of Digital Standards Policy in the UK government's Department for Science, Innovation and Technology. She works in the Digital Standards team in the department, which brings together the UK government's global engagement with key internet governance and digital standards bodies. And she works on the Digital Standards Policy portfolio, which includes standards policy on new and emerging technologies such as AI, and other areas such as quantum technology. So welcome, Nikita; thank you for joining us. Next on the panel is Ashley Casovan, Executive Director of the Responsible AI Institute, which is a multi-stakeholder non-profit dedicated to mitigating harm and unintended consequences of AI systems. Ashley has been at the forefront of building tools and policy interventions to support the responsible use and adoption of AI and other technologies. She'll tell us more about that, including her and the Institute's important work on certification. Previously, Ashley led the development of the first major AI-related government policy instrument in Canada, the Directive on Automated Decision-Making. Welcome, Ashley, and thank you for taking the time to join us. Wansi Lee is the Director of Data-Driven Tech at Singapore's Infocomm Media Development Authority. In the area of AI, Wansi's responsibilities include driving Singapore's approach to AI governance, growing the trustworthy AI ecosystem in Singapore, and collaborating with governments around the world to further the development of responsible AI. She is also responsible for encouraging greater use of emerging data technologies, such as privacy-enhancing tech, to enable more trusted data sharing in Singapore. Welcome, Wansi, and thank you for joining. And then, last but not least, we have Aurelie Jacquet, who is an independent consultant advising ASX 20 companies on the responsible implementation of AI.
Aurelie also works as a principal research consultant at CSIRO's Data61, which is part of Australia's national science agency, and she leads global initiatives on the implementation of responsible AI in various areas. One piece that's particularly worth highlighting, which, again, we'll hear more about, is Aurelie's role in chairing Australia's national committee for AI standardization, which represents Australian views within ISO in the development of AI standards. Welcome, Aurelie, and thank you. Great, so with those introductions done, let's move on to the first round of questions. And I would like to start with you, Nikita, from a UK perspective. I mentioned earlier, at a very high level, what the relationship between the hub and the wider policy thinking in government has been and is, but it'd be great to hear from you a bit more about how policy thinking in DSIT relates to the hub. What are the ideas that led to the creation of the hub? And why does the UK government think that this is an important initiative?

Nikita Bhangu:
Sure, thank you, Florian, and good morning to all of those in the room, and good afternoon and evening to those online as well. As Florian mentioned, I'm Nikita Bhangu, and I'm the UK government representative on the panel today. Florian, Mathilde and Sonny provided a great overview of what the AI Standards Hub does. Just to provide a bit more context from the UK government perspective and our policy thinking, I'll run you through our approach to standards and how we've embedded that into our AI policy and governance as well. To start with, it's just to note that the UK government sees many benefits in AI standards and in engaging in the standardization landscape. In our recent AI White Paper, which sets out our approach to regulating AI more broadly, we noted the importance that standards and other tools, such as assurance techniques, can play within the wider AI governance framework, and how they can help implement some of the approaches from the UK government's AI policy as well. The paper recognizes that digital standards are not an end in themselves; they are a means of making the technology work, and it's really important to consider the wider toolkit that we have within our regulation and governance approach to AI as well. Under the UK presidency of the G7, we also looked into digital standards with our G7 and like-minded colleagues, and set up the collaboration on digital technical standards, to note the importance of working together within this space and to recognize the benefits that standards have within the wider AI policy and regulatory framework.
Having said that, in terms of the benefits of AI standards, we also recognize that it is a very complex space. From speaking to our stakeholders, and through our research and collaboration with international partners, we recognize that there are many barriers to participating in the AI standardization ecosystem. So, as the UK government, we were really keen to work with our stakeholders and our international partners to reduce these barriers, so that standards can be for all: whether it's knowing what standards are and how to adopt them into your business, or encouraging that multi-stakeholder, global approach to developing standards and providing all groups with the opportunities and toolkits they need to participate in this ecosystem as well. You will have heard a lot at the IGF today about the importance of collaboration and a multi-stakeholder approach to digital technologies. That's exactly the same for standards, which is quite difficult to do because, as I mentioned, it is quite complex: many of the people who develop standards have been playing in that game for many years, so there is a need to support our stakeholders, to help get them into those organizations and really understand what standards are. So, through consulting with our stakeholders, we identified the key challenges in the UK, which Florian went through in the presentation just now, and we thought about how we can intervene in that market to support our stakeholders in reducing those barriers, to enable the benefits of AI standards to seep through.
Some of our key aims in setting up the AI Standards Hub were to increase the adoption and awareness of standards, to create clear synergies between AI governance and standards, hence our work with the AI White Paper and setting out the role of AI standards as a tool for trustworthy AI, and also to provide practical tools for stakeholders to understand and engage in the standards ecosystem. So that really was our thinking behind setting up the AI Standards Hub and working with our key experts in the field, bringing together parts of the UK's national quality infrastructure, the British Standards Institute, the National Physical Laboratory, and our national AI centres, to bring those minds together so that we can reach a wide user base in the UK and beyond, and help reduce the barriers we've seen in this space. The AI Standards Hub has been running for a year now; I think next week is the first birthday of the AI Standards Hub, which is great, and we've seen lots of success in this space over the past year. We're looking to increase our international collaboration with the AI Standards Hub in the coming years, and I'm really keen to follow up on this conference and participate with you more in that space as well. The last thing I would note is that the UK government commissioned an independent evaluation of the pilot phase of the hub, which was the first six months, just to understand what's worked well and how we can continue growing. We will be publishing that evaluation on our gov.uk website, so it will be accessible for all to look at. But some early findings really indicate that the hub has helped support the UK community in understanding what AI standards are.
We conducted a survey and found that 70% of respondents noted that the hub is really helping to bridge that knowledge gap, and is inspiring and motivating them to get more involved in the standards development organizations, which is great to see. I'll stop there.

Florian Ostmann:
Great, thank you very much, Nikita, for adding that context, and yes, it's exciting that we're approaching the one-year anniversary; thanks for mentioning that. Great, so we'll now move on to the international perspectives, and I didn't make it explicit earlier, but the great thing about the panel is that we'll have perspectives from Canada and the US, so North America, which is Ashley's focus, and then Wansi from Singapore, and Aurelie's experience in Australia. It'll be great to hear how some of the themes that we shared resonate with your experiences in those countries. So, as a first round, I essentially would like to ask each of you roughly the same question, which is: how does what we've presented so far, and of course you've heard about the AI Standards Hub previously, the challenges that we're trying to address, the kind of initiative that we've built, resonate with what you see in terms of AI standardization priorities and challenges in your countries? I know that in some cases there are initiatives that are quite similar, or comparable in nature, or at least overlapping, and perhaps we can start with you, Ashley. One such initiative is the Data and AI Standards Collaborative that you are heavily involved in. It'd be great to hear a bit more about that, and also, more generally, your reflections on this space.

Ashley Casovan:
Yes, thank you so much, and thanks for having us here to present about the work that we’re doing, and also, I think, just establishing this really important conversation related to AI standards. I think it’s, as you’ve mentioned, becoming a more important discussion, or at least one that more people are reflecting on, given the connection to different types of regulations. However, it still seems to be a very confusing topic, because standards can mean so many different things. Ironically, standards are not standard, and so there’s a lot of different points in which, or entry points, I guess, into that conversation, and understanding why they’re being established for what purpose is something that we’re trying to reflect on in the Canadian context. As Florian’s mentioned, I am heavily involved in an initiative that’s been established by the Canadian government called the Data and AI Standards Collaborative, and I am the co-chair of that, representing civil society. And in this capacity, we’ve been really trying to understand the implications of AI systems, and the data that feeds into those, and really trying to bring together civil society, academia, and government agencies to reflect on what types of standards are needed, really, really similar in nature to what you’ve already heard from the Standards Hub. And one of the things that we’re quite interested in doing as part of this initiative, is trying to identify different types of specific use cases, again, kind of aligned to the pillars that Florian presented. on previously and understand the context specific standards that are required within the whole value chain of an AI system. 
And I guess what I’ll say in addition to that is maybe because I’m here to represent the North American piece, but I do not speak on behalf of NIST, but because it was mentioned earlier, we’re starting to see a lot of uptake in tools in the North American markets that is related to some of the work that is happening in these national government activities. So Florian earlier spoke about NIST’s AIRMF, the risk management framework. And what we’re starting to see through this initiative, OECD, et cetera, is the work through these multi-stakeholder forums to be able to establish good baseline initiatives for standards to be developed from. So that could be things like even just what does the life cycle of an AI system look like? What are the different types of definitions that we should be using for these systems and have some commonality amongst those? Because what we’d like to get into then more deep from a Canadian data and AI standards collaborative perspective is, as I said, back to those use cases, understanding what types of certifications, standards, mechanisms are required for both the evaluation of a quality management system that Nikita spoke to earlier that we’re seeing with 40-2001. And then what is needed at a product level, which is work that we’re doing at the Responsible AI Institute, which I’m sure I’ll speak to after. And then looking also to individual certification. And this is something that Aurelie, I’m sure, will address as it’s something that we’re working on. that she’s been quite interested in for a while in terms of what does individual training look like. And so when I mention these different types of standards that are needed, there’s really a breadth that we’re needing to look at. So I’ll leave it there and I’m just really happy to be here and have this discussion at an international forum like this.

Florian Ostmann:
Great, thank you very much Ashley and we’ll come back to some of those points later. Moving on to you Wansi, a similar question for you. How does sort of the points around the importance of standards but also the challenges, how does that resonate with your work and your experience in Singapore? And I believe there’s an initiative that’s quite relevant from your perspective that’s the Verify Initiative. It’d be great to hear a bit more about that and then your sort of views on standardization more broadly.

Wansi Lee:
Thanks Lauren. Hi everyone, I’m Wansi from Singapore. I’m from the Singapore government. Thanks for having me on the panel this morning. It’s really interesting to be able to talk about standards with like-minded folks from around the world. One of the things that we recognize in Singapore, for us in Singapore, it’s very important is the need for international cooperation. So that’s something when Sunny talked about just now is something that really resonated as well. So international cooperation, it can be done in various ways. Of course, Singapore is an active member in the ISO process. So we participate and we contribute and we vote and so on. But at the same time, we do not just be at the ISO kind of level of cooperation. We also work quite closely with countries and we also participate actively in the multilateral process. So maybe just as an example, since I actually spoke about NIST and that was also brought up in the earlier presentation, the NIST AIRMF coming from the U.S. and so many organizations that we’re looking at. And then for us, and what’s important then is how then do we, our own work in Singapore, map to or how do we work together with what NIST has already published. So we very actively started a mapping project with NIST. We developed a crosswalk where we sort of looked at what we’ve done in Singapore in terms of our guidelines for AI verify, model AI governance framework that we have published a couple of years ago. And then we did a mapping exercise then see where we are aligned and where we’re different. And of course, at this level, we’ve gone to some level of detail. Even at the level of details, there are many similarities and quite a lot of alignment. 
We find that this work is very helpful for organizations or companies that are operating internationally because they want to make sure that this, what they’re doing in terms of implementing the right practices and so on for responsible AI, it is aligned both to Singapore’s requirements as well as some of the standards work that’s happening in the U.S. So that’s why we started that process with NIST. And of course, then extending that, then we’re looking at other standards that are being developed through ISO, SENSE, and NALAC and so on, and to see how we can align as well. So that’s one example of how we can cooperate internationally and how we can make sure that there’s at least some kind of alignment or interoperability amongst the kind of guidelines and standards that have been developed. The other area that resonated is the need for multi-stakeholder engagement. Of course, there are platforms to do that. ISO is one platform. Our own Singapore Standards work is also another platform. But I thought, as Florian mentioned, I’ll highlight one of the things that we’re doing that’s a little bit different, just to show that there are many alternatives out there. So besides guidelines… requirements that the Singapore government sets out. We also wanted to make sure that organizations are able to demonstrate adherence or compliance to some of these guidelines, right? So we developed the AI Verify Testing Framework and Toolkit. So it’s a set of detailed requirements of how then you think about validating responsible AI practices or implementation of responsible AI requirements when organizations implement AI systems. So quite a lot of detailed process checks, for example, align again to international principles. We looked at requirements from around the world. We looked at principles from OECD and so on, and then we define that into a set of testing requirements. At the same time, we also identified how do we test, right? 
It’s not just about process checks, but also how do we actually test the system? So we developed a toolkit looking at some of the work that’s already been done by academics around the world, as well as some of the work that’s been done by companies and put together a toolkit to test for fairness, explainability, and robustness, because those are things that we think we can test at this stage. And that, but we also recognize that testing capability continues to evolve and there are many gaps. So people around the world are working on different aspects. That’s why then we decided for AI Verify Testing Framework and Toolkit to open-source it, that’s one, but not only to open-source it so that people could contribute, but also created a foundation, an open-source foundation to support the contribution and engagement of organizations, developers, individuals around the world to build up AI Verify toolkit and framework. Even as we look at generative AI, for example, that’s something then that needs to be extended. verify and that’s why we feel it’s important to work with the global community and the open source foundation is one way in which you can get multi-stakeholder involvement in technical development as well as sharing of knowledge and experiences in the space of AI governance testing. So that’s one kind of slightly different take on how we take on multi-stakeholder engagement approach. Thanks. I’ll just pause here for now.

Florian Ostmann:
Thanks. That’s great. Thank you, Vansi. And it would be great to come back later in the next round and go into a bit more detail both on sort of priorities for international collaboration and the multi-stakeholder involvement. But before going into more depth, let’s move on to you, Aurelie, sort of for your general take on this topic. You, of course, have a lot of hands-on experience, you know, probably the most hands-on experience in relation to standards given your role as the Australian committee chair. So it would simply be great to hear from your vantage point what’s your take on AI standardisation both in terms of importance, challenges and sort of the international cooperation and multi-stakeholder angle.

Aurelie Jacquet :
Thank you, Florian. And again, delighted to be here in this forum and talking about standards and certifications. This is my favourite topic. To your point, I’d like to maybe go back in time and remind that actually back in 2017 already, the UN published, there was a few papers, academic papers that were published on standards at the time explaining how they can be used as an agile tool for international governance. And so now that the standards are mature, and we see a lot more published, there’s increased interest. From my perspective, I actually led Australia active participation in the standards. So my motivation was actually, I come from the global markets, financial services, and I saw the mini crash that had happened. We had to, we had an onset of regulation that came after the GFC. And at the time, from a compliance perspective, the thought was we really need some set of best practice that we can provide to industry in order to ensure that the onsets of regulation is industry informed. So that was a strong motivation for us to actually make the submission to standards and ISOs for Australia to actively participate and shape the international standard on AI. So that was our entry into that world back in 2018. And as I said, the core business case is Australia is a small country and it really needs to actively participate in a topic and in the involvement of best practice that for AI that is effectively an international, that’s got an international remit. So we had a roadmap already in 2020 for AI standards that I’d already focused on 40, 2001. You’ll hear that number a lot from me. That’s the AI management system standard. And this is what we described as the crown of the journey. standards because it provides for the certification of AI systems. So this was one key part of our roadmap. 
Obviously, also as part of the work that I do with CSIRO, Data61 and National AI Centre in Australia, there was one challenge is often standards, they are embedded everywhere in our life, but they’re not visible and often organisations are not aware of them. So we started also in Australia through the National AI Centre and through the Responsible AI Network, which is a community of what we guess is best practice with a community of experts. That’s got seven pillars, including standards. We started to develop education program that also covers best practice, including AI standards. So the initial course that we developed were on what are standards and how they’re part of our daily life and how they’re relevant for AI. And of course, the AI management system standards, what’s coming, what’s likely to become the standards that will enable audit of those AI systems. With CSIRO, Data61, we’re also building tools, a set of tools that are leveraging standards work. And you see in a day-to-day as part of adoption in Australia of standards, we’ve already got the NSW AI Assurance Framework that actually is leveraging standards to provide assurance for AI system used by the government. This has been made mandatory for all public services. in New South Wales, if they’re using AI, they have to go through that AI assurance framework. And obviously, from a business perspective, there’s definitely some appeal, we see increased appeal in looking at standards that are starting to, that have effectively over 60 countries that are involved in developing them, but also that, let’s say, SENCENELEC and the EU Commission have been interested from the beginning. We had the EU Commission coming to our ISO meeting from 2018 onwards. So that’s why we see our governments already, even at the federal government, we had some guidance about Chad GPT and generative AI provided and that referenced some of our standards on bias and others. 
So there’s been a good uptake from that perspective in Australia. And I finish with international initiatives that we have initiated. Also, with Standard Australia, we have developed a workshop that we delivered at the APEC SOM, explaining how effectively AI standards can help, really, how standards can help scale AI and what has the benefit for organization in different state. But to your point, Florian, there’s still this challenge of getting the standards well known, so they’re much more visible and increase participation. But really, this is standards so far as proven, as I said from the beginning, as a very good agile tool for international governance.

Florian Ostmann:
Great. Thank you very much for that overview, Aurelie. And I think that was a really good segue to the next round of questions. So, you know, we ended on the challenges and more work that needs to be done, and I’d like to basically do two rounds, you know, on each of the topics for the session. So the first one on multi-stakeholder engagement and then the second one on international collaboration. Let’s start with multi-stakeholder involvement. So, of course, standardization is already, you know, compared to other governance mechanisms, a very, you know, inclusive mechanism, right? I mean, it’s, you know, contrasts with regulatory rulemaking, for example, in that the process is open to all stakeholder groups in principle. But we are aware, as we mentioned at the beginning, that not all groups are equally represented. So it would be great to hear from all of you, and we’ll start with Nikita. You know, what do you see as the main challenges? What are the main obstacles for achieving, you know, equitable involvement from all groups? And also, what are the most promising strategies for addressing those challenges? So what can be done, including what can be done collaboratively at the international level to ensure and increase stakeholder inclusion?

Nikita Bhangu:
Sure, thanks, Florian. So I’ll start with some of the main challenges. I mean, we covered it previously in terms of sort of what the UK sees as some of the challenges, but just to kind of point to a few of the key ones. For the UK in particular, it’s sort of that ensuring we have the right representation at relevant standards development organizations. We’re seeing quite a few large companies, for example, representing industry at standards development organization, which is, of course, great in terms of providing that view. However, for the UK, most of our technology companies are small to medium enterprises as well, who often are quite small companies and may not have a large regulatory team or standards expert. who have the skill set needed to engage effectively in the standards development organization. So that’s, we recognize that as a key challenge in terms of our small to medium enterprises as well. I think for us as well, another key stakeholder group is civil society, which has sort of always been a bit of a, quite challenging to get the resourcing and the expertise into standards development organizations and I guess Florian just mentioned the key point of the standards are for everyone and standards are at the sort of offset providing those building blocks for how technology will be developed. So it’s crucial that all stakeholder groups are taken in mind when developing these standards. I think another key challenge for UK government particularly is the issue of government is another key stakeholder and kind of expertise getting the state standards development organizations. For us, we have a very, very small technical team within our digital department back in London, which obviously their resources can only stretch so much. So getting those viewpoints and that coordination across sort of constrained resourcing is another challenge. One thing we’re doing in the UK government at the moment is thinking about talent pipeline as well. 
You know, trying to increase diversity presently but also in the future working with standards development organizations and other international partners to create sort of the next generation, I guess you can call it, of standards development organizations, developers as well. I know there’s a lot of work going on in this space already. I think BSI do quite, our national standards body do quite a lot in that space. And the IEC has a young professional program as well to sort of, again, provide that career route and continuation of skill sets into the standards organization as well. One thing particularly relevant for the IGF is that we’re also working with the MAG, the multi-advisory group, to sort of embed digital standards within that thinking as well, so again using international fora to promote that view of multi-stakeholder and the tools that we can develop together to get different voices in standards development organizations as well.

Florian Ostmann:
Great, thank you Nikita. Ashley, over to you. How does that resonate with you in terms of, you know, your views on obstacles and also solutions for ensuring inclusion and participation of stakeholders?

Ashley Casovan:
Yeah, I think all of that resonates here as well. I think one of the challenges that we’re actually having with the Data and AI Standards Collaborative is that we’re trying to be incredibly inclusive and so to some of the points that Nikita was just making, the bandwidth for the teams that are actually within government that are trying to process and analyze all of that information does become constrained and so it’s, I think, why I spent so much time in my previous discussion talking about the need for us to really understand what types of standards are we talking about because then we can identify who needs to be at the table for which types of conversations. To have broad-based discussions about all types of AI in all types of contexts makes it really difficult to get the right stakeholders there. One very significant effort that we’re trying to make is to ensure inclusion across all aspects of civil society and so something that’s been missing from a lot of our conversations is Indigenous groups in the Canadian context and so we’re making a concerted effort to ensure that voices from the most impacted populations in Canada are being not only brought in but again really understand in the harms that can come from these AI systems to try and find appropriate ways that standards can help to mitigate those.

Wansi Lee:
Great, thank you Ashley. And over to you, Wansi, for your views on stakeholder participation. Yeah, it’s definitely a very complex space. Singapore is a small country, smallest here I think amongst everybody on the panel, and we have also limited resources. One of the things that we need to do is then make sure that we focus our resources in areas where we can contribute to the global conversation, because there’s lots going on in the standard space. And so we want to make sure that what we do makes sense in the grand scheme of things. And that’s why we are very targeted in terms of where we want to develop and spend effort, because a lot of the things that’s already happening internationally, we could adopt. And where we think there are gaps, then we want to make sure that we have to plant. And that’s why we look at actually tooling, testing as an emphasis in terms of where we want to put our resources. That’s not to say that other areas are not important, it’s just where we think, oh, there’s a gap and this is where Singapore can help. And that’s how we started AI Verify. In terms of getting more involvement, I think we definitely are very active in making sure that what we do is not just a government kind of perspective. We are very active in engaging industry, companies, large and small companies. operate globally, that operate domestically in Singapore, to make sure that their voices or their input can be incorporated. Everyone or all organizations can participate in let’s say the AI Verify Foundation, open source anyway. We’re trying to make that mechanism for any organization that’s interested, even if you’re very small, not from Singapore, it is a platform that you can you can contribute on. 
And then from there, then we take some of the work that’s being done at AI Verify Foundation, rationalize it at the national level in Singapore, and then we see how we can then support that more globally across in other platforms, whether it’s OECD, GPA, or ISO, or other multilateral platforms. Thanks.

Florian Ostmann:
Great, thank you, Vansi. And over to you, Aurelie, for your take on stakeholder inclusion.

Aurelie Jacquet :
Thank you, Florian. So, Australia has a small yet powerful delegation. So, if you have a small delegation, that should not stop you from being involved in the standards. Most of the experts were new to standards, so it took a little bit of adaptation. When we got started back in 2018, one thing I’d like to highlight is I say, we worked with other small countries to ensure that the mechanisms that are in place actually fitting for our size. And when you have many experts, you cannot have them in all the different meetings at all times. So, we’ve worked very closely with others to make this process manageable. Australia is actually leading a great way to look at the key element that we have in Australia and how we want to lead them overseas. Of course, we have the resource challenge and the time challenge. From a resource perspective, we’re very lucky to help with an organisation that is a non-for-profit or smaller business. We have help from the government that just allows us to participate and travel as volunteers and attend the ISO plenary that’s coming actually in Vienna next week. One challenge also that we’ve been working very closely with Saira and the National AI Centre is really if you have not participated in the development of those standards, sometimes it’s hard to get the context around those documents when they’re written. Our experts have worked really hard to start developing some white papers on giving the background between 42001 or some of the bias standards or the sustainability standards that we are developing and how they’re building into practice. One challenge remains, obviously, it’s for SMEs, the uptake standards are often uptake by broader organisations, so how do we make this more fit for purpose for SME? How do we make it easier accessible for SME? That’s conversations that are ongoing and on which we are working very closely.

Florian Ostmann:
Great. Thank you, Aurรฉlie. Now, we’re already approaching sort of almost getting close to the hour, so we don’t have much time left. There’s lots more that I’d like to ask, but I also would like to make sure that we get a chance. So, I think we’ll briefly pause the panel and see who might like to come in. I think there’s one contribution in the back and also Holly in the front, so if the two of you would like to come in and then if anyone online would like to come in, you will be able to actually speak, so please do raise your hand if you’d like to contribute. But, yeah, please go ahead.

Audience:
Thank you. My name is Walton Atwes. I’m the coordinator of the Internet Standards, Security and Safety Coalition here at the IGF, Dynamic Coalition. I’d like to make two comments. What I notice is that what we’re talking about here are all more or less government-accepted standards institutions like ISO, SENELEC, et cetera. What I’ve noticed in the research that we’ve been doing on internet standards is that in the technical community, quite often, all sort of standards are made as well, and we found that they’re almost 100% not accepted in government policies. So if that, I don’t know, but if that is the case with AI as well, then you have two separate bodies creating standards, which one may be official at some point and the other ones who make the internet run and AI run on the internet are not addressed in any way. So my suggestion would be to reach out to the technical community and see what is being done in the IETF or IEEE, et cetera. My second comment is more strategic a little bit. I hear these fantastic initiatives that you’re presenting, and we have probably had 19 other AI sessions here at the IGF. So what is going to come out of this session? Ideally, it would have been some sort of, we can’t call it the declaration in the IGF context, I know, but what you’re doing should be the main message coming out of the AI track here at the IGF. at the IGF, and probably now we all go home with a little report somewhere stuck on a fairly obscure website. So when you talk about the MAG, perhaps if you want to influence it, that next year there will be some sort of a declaration on this. Because what you’re presenting here is the future. And it’s a shame if we go home without the world hearing about it. So thank you.

Florian Ostmann:
Thank you very much for that. Thanks for the encouraging words in your second comment. To the first comment, just to briefly say, I think the point you raised is a really important one. And we’re focused in the presentation on the organizations that we mentioned, but we very much are aware of the wider landscape, including standards developed in ITF and others. And so it’s really part of the mission of what we’re trying to achieve is to make those connections and provide the full picture. So thanks for bringing that in.

Audience:
Holly, please. Hi. I’m Holly Hamblett with Consumers International. We’re a membership organization of consumer groups around the world. And I want to start by saying, I think this is a really great initiative. I think it’s going to be really helpful to have that multi-stakeholder approach, and it’s really vital to get consumer organizations and larger civil society involved in these processes. But I wanted to briefly comment on just the value of consumer organizations joining the AI standards hub, what we can bring, and then commenting on what the AI standards hub will give to us and how it will be helpful. So the value of consumer groups and Consumers International, especially, is that we can play this role in ensuring that AI is developed ethically and responsibly because we represent the interests of consumers who are the end users of the products and services. And we bring this unique perspective that a lot of consumer organizations are complaint mechanisms. for consumers, so they have direct insight into how they’re using the products and services, how it’s impacting them. They do a lot of product and service testing themselves, so they have information on whether it’s compliant with consumer protection regulations that are existing, whether it needs to be enhanced in some way with standards. So what I’m saying here is that consumer organizations have a lot of data that can help standards be supported with evidence and make sure that it is reflective of consumer interests. And the things that consumer organizations can bring to this space, we have those insights to make sure that standards are grounded in ground-level realities to reflect how the technology will impact consumers. We can bring a global perspective, not just Consumers International, but our whole membership base. We have around 200 consumer organizations in around 100 countries. This is very, very global, very diverse and representative, and bringing in these voices is absolutely vital. 
We can help ensure that standards are designed to protect consumer interests from the outset. It’s a huge problem with regulation, standards, policies that consumer interests are brought in at the end, and they’re an afterthought a lot of the time, which leads to further harm for consumers. But bringing them into the discussions to begin with is a really great way to make sure that not only is everything compliant with existing regulation, but that it is sustainable in the long run, because we can consider what impact it will have on consumers, mitigate the risks, and ensure that everyone enjoys the benefits. We can provide feedback on draft standards to make sure they’re clear, concise, and easy to implement. This isn’t just generally to businesses, governments, anyone that it applies to, but this is to consumers themselves. Consumers are aware of the standards. They’re able to exercise their consumer rights, they’re able to engage with technology a lot better. So it’s really good to make sure that they are translated into very consumer-friendly language. And that’s something that consumer organizations can absolutely help with. And then final way that we can help is to promote the adoption of standards by consumer organizations, businesses, governments. We are fairly connected in who we work with, and it’s a big benefit of working with consumer organizations that we’re able to say this is consumer-friendly, we support this. And it can help push that forward as a standard. The AI standards hub for us is going to be incredibly helpful. Florian mentioned absolutely in the PowerPoint that there are two very sizable challenges that consumer organizations face, or civil society generally. One being that we are not often welcome in the spaces, it’s very difficult to get into the standardization process. This is largely due to the fact that the process is dominated by industry experts or technical representatives, and civil society isn’t generally there. 
Which then leads to the consumer interest being the afterthought, which is something that absolutely needs to be avoided. And then secondly is the capacity building. Some of our organization’s members are wonderful in the digital space, they’re very, very clued up on it. Other members are experts in consumer protection and consumer protection only. It’s very difficult for consumer organizations, traditionally being underfunded, not very well resourced, and not experts in everything, to then try and cover the vast scope of all digital issues. particularly complex emerging technologies like AI. So something like the community and capacity building of the hub is gonna be beyond helpful. This isn’t something that we offer our members, so it’s gonna be helpful to us as an organization and to our members through that as well to make sure that they can contribute not only to our work but to work globally, internationally, make sure that there’s the space and the capacity there to be able to do that. And then I’ll end on one final note because I know I’m taking up a lot of time here, but it’s very important to consider that consumer organizations are not a monolithic group. We represent a diverse range of views and interests and it’s important to ensure that there’s broad representation of all consumer voices and AI standardization. And one way to make this easier is for the consumers themselves to understand the process, to contribute to it and to know what is going on and how they can be a part of this. So we need to develop these user-friendly tools, we need to have the resources and help consumers learn about AI, AI standards and provide their feedback consistently. Thanks.

Florian Ostmann:
Great, thank you very much for that. And we’ll be very interested to explore with you how we can work together, address those challenges. And it’s particularly great to hear about and consider your role as an international organization that brings together consumer organizations from around the world. Now, we’ve almost run out of time. I’d like to use the last couple of minutes, if we can, for a short, very quick round across the panel and invite each of you to share your final reflections. And perhaps in particular, if you have any points, maybe your top three priorities for international collaboration, if you bring it back to that theme and also going back to the earlier question or the comment to encourage us to think about tangible,

Nikita Bhangu:
sort of tangible outcomes following these kinds of discussions and collaborations in this space, really emphasizing research on standards and UK research, but also working with international partners to understand the broad issues that we’ve discussed today as well. Thanks.

Florian Ostmann:
Thank you, Nikita. Ashley, over to you.

Ashley Casovan:
Thanks. I’ll keep it short. I think that understanding what’s already happening in the space, so that we’re not reinventing things in any one country, is really important. So an international exercise, whether through the OECD or another forum like the IGF, to do a mapping of which standards are being developed where, so that we can understand not only what’s being done but also what types of harmonization efforts are required, is something I’d really love to see. And then, again, I can’t stress this enough: AI is not one monolithic thing. So really starting to break down the different types of uses, and therefore the harms attributed to these systems and those specific uses, and then getting the right people, the right stakeholders, around the table to have those dialogues, recognizing that AI crosses or transcends borders, is going to be an important set of dialogues for us to have in the years to come.

Florian Ostmann:
Great. Thank you, Ashley. And, Wansi, over to you.

Wansi Lee:
Thanks. I’ll also keep it short. I think for us, it’s really important that there’s no fragmentation of AI standards and AI regulations. So we have been working very hard over the last few years, and we continue to do so, to partner with countries as well as to be active on multilateral platforms, to try to drive towards, or at least work together towards, some kind of harmonized, aligned, or interoperable standards for AI. I mean, we’re now starting to see a lot of countries coming up with their own requirements. In Singapore we’re doing it both within our region, in ASEAN, where we support the development of a consistent ASEAN guide for responsible AI implementation, but at the same time, beyond ASEAN, we’re also active globally. Thanks.

Florian Ostmann:
Great, thank you. Aurelie.

Aurelie Jacquet:
Thank you. So, following on Wansi’s point, I think what’s important to know is that it’s actually good to see different practices and different regulatory initiatives. What standards provide is interoperability; that’s why we are doing standards, that’s why we are involved in standards. It’s not about unification, it’s about harmonization. So the key point that we made in some of our workshops at the APEC SOM is that standards allow for diverse views while actually making sense of each of these views; the standards are the thread that brings all those views and perspectives together. From an Australian perspective, the three things that we focus on are making sure we use AI responsibly and that we can scale it. To do that we need interoperability, and that’s why we use standards, not only as a way to check international best practice but also to learn from it, because when you have 100 experts from government, academia, and industry together in a room discussing best practice for responsible AI, that is a great resource to inform local policy, but also to develop our experts and grow the industry.

Florian Ostmann:
Thank you, great. In many ways we’ve only scratched the surface during the last 90 minutes; we could easily spend another hour or two discussing, but I’m glad we got as far as we did. I do hope that what we were able to cover piqued your interest, for those of you who might be entering the standards space without a background, and, for those of you who are already involved, gave you those different perspectives from around the world. And for all of you, going back to the motivation for the session and the discussion around international collaboration, we’d be really interested if you have ideas on collaborating and joining up initiatives across the fields that you’re working in, and we’d love to hear from you, so please do reach out to us if you have ideas for working together. I think that’s the main message to end on. Other than that, all that is left to do is to thank everyone: thank you to our esteemed panelists for joining online across different time zones, thank you, Nikita, for being in the room, and thank you to my colleagues Matilda and Sunny for being on the stage. So thank you everyone, and let’s hope that there’ll be a continuation of these discussions and that we see many of you again in one way or another. Thank you.

Ashley Casovan

Speech speed

168 words per minute

Speech length

1097 words

Speech time

392 secs

Audience

Speech speed

172 words per minute

Speech length

1396 words

Speech time

487 secs

Aurelie Jacquet

Speech speed

133 words per minute

Speech length

1389 words

Speech time

625 secs

Florian Ostmann

Speech speed

171 words per minute

Speech length

5342 words

Speech time

1874 secs

Matilda Road

Speech speed

160 words per minute

Speech length

1346 words

Speech time

506 secs

Nikita Bhangu

Speech speed

164 words per minute

Speech length

1599 words

Speech time

586 secs

Sonny

Speech speed

189 words per minute

Speech length

1057 words

Speech time

335 secs

Wansi Lee

Speech speed

173 words per minute

Speech length

1503 words

Speech time

523 secs

How to retain the cyber workforce in the public sector? | IGF 2023 Open Forum #85

Full session report

Martina Castiglioni

The European Cyber Security Competence Centre (ECCC), operational from this year, plays a key role in the ambitious cyber security objectives of the Digital Europe Program and the Rise of Europe Programs. Together with member states, industry, and the cyber security technology community, the ECCC aims to shield European Union society from cyber attacks. It is a positive development that demonstrates a proactive approach to cyber security in Europe.

However, despite numerous cyber security initiatives, the skills gap remains a significant challenge. While public and private investment initiatives aim to close this gap, the situation is still concerning. Simply having a large number of initiatives does not guarantee a reduction in the skills gap. This ongoing issue requires further attention and efforts to ensure a skilled workforce meets the demand for cyber security professionals.

On a positive note, the Cyber Security Skills Academy serves as a single entry point for cyber security education and training in Europe. Supported by €10 million in funding, the academy aims to develop a common framework for cyber security role profiles and associated skills, design specific education and training curricula, increase the visibility of funding opportunities for skills-related activities, and define indicators to monitor market progress. The existence and support for the Cyber Security Skills Academy are promising steps in addressing the skills gap and providing comprehensive education and training opportunities for those interested in cyber security.

In conclusion, the European Cyber Security Competence Centre (ECCC) actively works towards achieving the cyber security goals of the Digital Europe Program and the Rise of Europe Programs. However, the persistent cyber security skills gap remains a challenge that needs attention. Efforts are being made through various investment initiatives, and the establishment of the Cyber Security Skills Academy shows promise in bridging this gap. By prioritising education, training, and skill development, Europe can strengthen its cyber security capabilities and effectively protect its society from cyber threats.

Audience

According to the information provided, Sri Lanka is currently facing challenges in implementing cybersecurity policies. Despite the development of a five-year policy for cybersecurity, the implementation process is proving to be difficult. This negative sentiment suggests that Sri Lanka is struggling to effectively address cybersecurity issues and protect its digital infrastructure.

In addition to the cybersecurity challenges, Sri Lanka is also experiencing a talent deficit in the IT sector. It has been highlighted that there are around 30,000 vacancies for graduates in the IT industry. This negative sentiment underscores the need for more qualified professionals in the field to meet the demands of the growing industry. It implies that the lack of skilled talent could potentially hinder the growth and development of the IT sector in Sri Lanka.

However, amidst these challenges, there is a glimmer of positivity in the form of strong collaboration. The speaker emphasises that building capacity within the government can only be achieved through collaborative efforts. This positive stance recognises that partnerships and cooperation between different stakeholders are crucial in improving the government’s ability to address various issues, including capacity building. It implies that by working together, the government can enhance its capabilities and effectively meet the demands of the ever-evolving digital landscape.

Furthermore, it is acknowledged that the digital world is inherently imperfect, and no system is completely safe from hacking. The speaker provides examples, such as the Pentagon and the White House, to support this argument. This negative sentiment highlights the notion that despite advancements in cybersecurity measures, there will always be weaknesses that can be exploited by hackers. It suggests that the focus should not solely be on finding a foolproof solution, but also on continuously improving and adapting cybersecurity measures to mitigate risks.

In conclusion, Sri Lanka is currently facing challenges in implementing cybersecurity policies and addressing the talent deficit in the IT sector. However, there is optimism for building capacity within the government through strong collaboration. It is also acknowledged that there is no foolproof solution for preventing hacking, as systems will always have vulnerabilities. These insights highlight the need for ongoing efforts to strengthen cybersecurity measures and foster collaboration to effectively address digital challenges in Sri Lanka.

Yasmine Idrissi Azzouzi

The global shortage of cyber security professionals is a pressing issue, with a current deficit of 3.4 million individuals. Unfortunately, the public sector faces difficulties in competing for talent due to a lack of funding. To bridge this workforce gap, it is crucial to raise awareness about the diverse range of roles within the cyber security field and its multidisciplinary nature. Contrary to popular belief, cyber security is not solely a technical domain but encompasses various disciplines.

Addressing the underrepresentation of certain communities, including women and youth, in the cyber workforce is essential. By promoting inclusivity and diversity within the field, we can encourage more individuals from these communities to pursue careers in cyber security. This aligns with the goals of SDG 5: Gender Equality and SDG 4: Quality Education.

Furthermore, there is a revolving door between the public and private sectors in cyber security. To attract and retain qualified professionals, it is imperative to invest in their development and well-being. Upper-level positions face a significant shortage, and professionals in the public sector often experience excessive workloads. This highlights the importance of investing in cyber security professionals to ensure an efficient and effective workforce.

To address these challenges, it is proposed to appeal to individuals’ sense of purpose and prestige. Promoting the opportunity to work for the government and contribute to national security can be enticing to potential candidates. By framing the cyber security field as challenging and impactful, it becomes more attractive to individuals seeking meaningful work.

In conclusion, the shortage of cyber security professionals is a global concern that requires immediate attention. Raising awareness about the diverse range of roles, addressing underrepresentation in certain communities, investing in professionals, and promoting the sense of purpose and prestige associated with working in the field are vital steps to bridge the workforce gap. By doing so, we can ensure a more secure digital landscape and contribute to the goals of SDG 8: Decent Work and Economic Growth.

Marie Ndé Sene Ahouantchede

The ECOWAS region, encompassing West African countries, is currently grappling with escalating cybersecurity challenges due to the rapid advancement of digital technology. This digital transformation brings about new opportunities for malicious cyber activities, resulting in a negative sentiment towards the region’s cybersecurity landscape.

One significant issue exacerbating the situation is the acute shortage of skilled cybersecurity professionals. The percentage of government and public sector organizations equipped with the appropriate cyber resources to meet their needs is alarmingly low, standing at just 29%. Furthermore, projections indicate that by 2030, an estimated 230 million people in Africa will require digital skills, highlighting the pressing need to address the inadequacy of skilled cybersecurity professionals to meet this demand. The limited supply of these professionals in the ECOWAS region is viewed as a negative contributing factor to the cybersecurity challenges.

However, it is encouraging to note that ECOWAS and West African governments are taking proactive steps towards mitigating the situation through the implementation of positive cybersecurity education and training initiatives. Under the umbrella of the Organization of Computer Emergency Response Teams (OCYC), the ECOWAS Commission launched the ECOWAS Regional Cybersecurity Hackathon, an event aimed at fostering innovation and collaboration to address cybersecurity challenges within the region. Additionally, an advanced training program was provided to member states in 2020, focusing on enhancing their capabilities in managing and responding to computer security incidents. These initiatives indicate a positive effort being made to strengthen cybersecurity education and training in the region.

A significant concern facing African countries is the brain drain in digital professions. Despite endeavors to attract digital professionals, the public sector’s salary policy remains uncompetitive amid the global shortage of digital talent. This brain drain further exacerbates the shortage of skilled cybersecurity professionals in the ECOWAS region, compounding the challenges faced and reinforcing the negative sentiment.

As a recommended course of action, the inclusion of education and training initiatives, alongside public-private partnerships, within the national strategy is deemed crucial to addressing the talent shortage in the field. Noteworthy examples include Benin’s Ministry of Digital Affairs collaborating with the Smart Africa Digital Academy to develop cybersecurity education, and the signing of a Memorandum of Understanding between Togo and the United Nations Economic Commission for Africa (UNECA) to establish the African Center for Coordination and Research in Cybersecurity. These partnerships demonstrate the importance of collaboration and concerted efforts across various sectors to bridge the talent gap and bolster cybersecurity capabilities.

In conclusion, the ECOWAS region is facing significant cybersecurity challenges as a result of digital transformation, leading to a negative sentiment. The shortage of skilled cybersecurity professionals aggravates the situation, further compounding the negative sentiment. However, ECOWAS and West African governments are implementing positive cybersecurity education and training initiatives, countering the shortage to some extent. African countries are experiencing a brain drain in the digital professions, adding to the challenges faced. Education and training, in conjunction with public-private partnerships, are recommended as integral components of the national strategy to combat the talent shortage. These insights highlight the need for concerted efforts within the region to strengthen cybersecurity capabilities and address the evolving cybersecurity landscape.

Regine Grienberger

The discussion centres on the crucial requirement for cyber experts within the public sector to ensure digital sovereignty. The need for digital sovereignty is being deliberated in both Germany and the European Union. It is argued that governments must have control over their own networks to assert their sovereignty in the digital realm.

To address this issue, it is suggested that a portion of the digital or digitisation budget be allocated for cybersecurity measures. Specifically, the cybersecurity agency recommends setting aside 15% of the budget for this purpose. Additionally, pooling cybersecurity services for multiple public institutions and moving data to the cloud are seen as effective strategies to strengthen cybersecurity in the public sector.

Another important aspect highlighted in the discussion is the need to increase cyber literacy amongst the workforce. It is acknowledged that humans often form the weakest link in the cybersecurity chain. To mitigate this, there is an idea to conduct a cybersecurity month in October, during which colleagues can be informed about various cyber threats and receive training on how to handle them.

Furthermore, it is emphasised that the public sector requires not only technical experts but also individuals who possess the ability to effectively communicate with management. The importance of having employees with a dual skill set, generic knowledge combined with cyber expertise, is highlighted. It is suggested that such individuals can be hired and then upskilled or reskilled while on the job.

In an interesting proposition, one speaker advocates for job rotation instead of retaining trained experts solely in the public sector. This would involve training individuals within the public sector, releasing them to work in private companies, and subsequently gaining them back later in their careers. This proposal aims to provide a more comprehensive skill set for cyber experts and foster collaboration and knowledge exchange between the public and private sectors.

Overall, the discussion centres on the various strategies and recommendations to address the shortage of cyber experts in the public sector and enhance digital sovereignty. By implementing these measures, it is believed that the public sector can effectively tackle cyber threats and safeguard national interests in the digital domain.

Lara Pace

The analysis examines several aspects of cybersecurity in both the public and private sectors. It begins by discussing the potential benefits of job rotation from the public to the private sector in cybersecurity. Understanding the challenges faced by the public sector within the private sector can lead to innovative solutions. Lara’s experience transitioning from the public to the private sector while focusing on global cybersecurity serves as evidence. This suggests that job rotation can positively enhance cybersecurity expertise and knowledge transfer between sectors.

The analysis then addresses the issue of retaining cybersecurity professionals in the public sector. Creating a clear and inclusive environment with well-defined career pathways is essential for keeping professionals. The report notes that professionals, including those in cybersecurity, have a natural desire to progress. By offering attractive career advancement opportunities and fostering an inclusive workplace culture, the public sector can improve retention. This argument is supported by the idea that a supportive work environment leads to higher job satisfaction and employee loyalty.

In terms of incentivization in cybersecurity, the analysis takes a neutral stance, suggesting that incentives do not have to be solely monetary. While specific evidence or arguments are not provided, the report proposes that recognition, career development opportunities, and job flexibility can be effective motivators for cybersecurity professionals. This implies that non-monetary incentives can attract and retain skilled individuals in the field.

The analysis also emphasizes the importance of effective human resource training in cybersecurity, paired with job creation initiatives. Currently, cybersecurity training often happens in isolation, leading to trained personnel leaving their geographic region. To address this, the analysis recommends a coordinated national effort that integrates comprehensive training programs with job creation strategies. This holistic approach can bridge the cybersecurity skills gap and provide more employment opportunities.

Lastly, the analysis acknowledges that cybersecurity is not always a top national priority. It suggests that when implementing initiatives, it is crucial to consider concurrent efforts that prioritize job creation. This ensures that cybersecurity professionals trained in the country remain in the field. It highlights the need for a balanced approach that aligns cybersecurity goals with other national priorities, such as industry and innovation.

In summary, this analysis provides insights into various aspects of cybersecurity in the public and private sectors. It discusses the benefits of job rotation, the importance of creating an inclusive environment for talent retention, and the value of non-monetary incentives. Additionally, it emphasizes the integration of training and job creation as a coordinated effort and advocates for balancing cybersecurity priorities with other national initiatives. These findings and recommendations contribute to a comprehensive understanding of cybersecurity and provide guidance for policymakers and organizations in navigating this evolving landscape.

Komitas Stepanyan

The analysis explores the urgent need to enhance the pipeline for cyber security professionals in Armenia. To address this issue, a range of initiatives has been implemented in the country. One initiative involves collaborating with renowned universities in Armenia to develop and nurture a skilled workforce in the field of cyber security. Furthermore, a campaign led by the deputy governor of the Central Bank of Armenia aims to raise awareness about the career opportunities and importance of pursuing a career in cyber security.

Specialized training is seen as vital in enabling professionals to effectively recognize and respond to cyber incidents. These training programs focus on incident response, forensic research, and compliance/audit of cyber security incidents. By equipping professionals with these specialized skills, they will be better prepared to handle and mitigate cyber threats and attacks.

In addition, the analysis highlights the unique appeal and satisfaction that can be derived from working in the public sector. While monetary motivation is important, the impact and sense of purpose associated with public sector work are highly valued. Public sector professionals have the opportunity to make a difference in the lives of thousands or even millions of people.

Efforts are underway to establish a nationally recognized Computer Emergency Response Team (CERT) in Armenia. This is essential for effectively responding to and managing cyber security incidents at a national level. Additionally, there are plans to apply for membership in FIRST, an international organization focused on incident response. These efforts demonstrate a commitment to enhancing cyber security capabilities and collaborations with global counterparts.

In conclusion, the analysis underscores the need to expand the pipeline of cyber security professionals in Armenia. Collaborations with universities, specialized training programs, the appeal of public sector work, and the establishment of a national CERT and potential membership in FIRST are all key components in fortifying the country’s cyber security landscape. These initiatives are crucial for addressing cyber threats, safeguarding critical information systems and infrastructure, and ensuring a secure digital environment.

Laura Hartmann

According to the World Economic Forum’s Future of Jobs report, there is currently a global shortage of 3.4 million cybersecurity professionals. This shortage is largely due to the increasing digital economy and the rising threat of cyber-attacks. The speakers highlight the need for a growing number of skilled individuals in the field of cybersecurity to address these challenges.

One of the main issues discussed is the public sector’s struggle to retain cyber professionals. Due to the lack of funding, many public sector organisations are finding it difficult to compete with private sector companies in attracting and retaining talented individuals in the cybersecurity field. This poses a significant problem considering the increasing number of cyber-attacks that require effective cybersecurity measures.

To tackle this issue, the speakers suggest the implementation of cross-industry initiatives and cyber capacity-building initiatives. Cross-industry initiatives involve collaboration between different sectors to raise awareness and address the issues related to cybersecurity. This approach allows for a broader perspective and a more comprehensive response to the challenges faced in the digital world.

Furthermore, the speakers emphasise the importance of holistic approaches starting from education. They argue that raising awareness about cybersecurity and building a solid foundation of knowledge in this field is crucial for public safety. This holistic approach also involves management understanding the need for investment in cybersecurity.

The analysis also reveals a positive sentiment towards cyber capacity-building initiatives, especially for developing countries. The speakers mention initiatives implemented by GIZ, commissioned by the Federal Foreign Office of Germany, to improve cyber capacity in partner countries. This highlights the importance of addressing the shortage of skilled professionals in the cybersecurity field not only in developed nations but also in developing nations.

In conclusion, the analysis highlights the growing global shortage of skilled professionals in cybersecurity due to the increasing digital economy and the threat of cyber-attacks. The public sector faces difficulties in retaining cyber professionals, and cross-industry initiatives and cyber capacity-building initiatives are proposed as solutions. A holistic approach, starting from education and raising awareness, is crucial for public safety. Additionally, the importance of addressing the shortage of skilled professionals in the cybersecurity field in developing countries is emphasised.

Session transcript

Laura Hartmann:
Okay, hello. Yes, welcome everyone, the audience in the room and those joining virtually, to our open forum on how to retain cyber professionals in the public sector. My name is Laura Hartmann. I work for the German Development Agency, GIZ, specifically on cyber capacity building, and I’ll be moderating the session today. We are very privileged to be joined by a distinguished panel, with speakers on site and joining virtually from the public and private sector, who will bring in various perspectives, share national, regional, and global insights, and also present very concrete initiatives on how this issue could be addressed. For the audience on site and online, you’ll have the chance to actively join the discussion after our speakers’ inputs. Before we start the session and I hand over to the speakers, I’ll give you a few framing points on why we are here today to discuss this. I think we are all aware that our digital economies are expanding, technologies are getting more sophisticated, and the number of cyber attacks and incidents is rising. What is not rising to the required extent is the number of cyber professionals protecting our infrastructures. According to the Future of Jobs report published by the World Economic Forum this year, there is a global shortage of 3.4 million cyber security professionals, and the number needed to support our global economy is only rising. Secondly, organizations are currently competing for talent mainly by paying more and more to the same pool of people. One could argue that this exacerbates the staff shortage, and the public sector cannot compete here because of a lack of funding. And thirdly, the whole topic is particularly important with an eye on developing cyber capacity building initiatives. 
So, as GIZ, for example, we are commissioned by the Federal Foreign Office of Germany to implement cyber capacity building initiatives with partner countries from different contexts, and since we started implementation, all partners have voiced the same concern: they fear that re-skilled, up-skilled people will simply leave the public sector. So our panel today aims to find answers to this challenge and to how to close this gap in cyber security professionals, especially for the public sector. I’m happy to introduce the first speaker of our session, Yasmine Idrissi Azzouzi. Yasmine is a cyber programme officer at the ITU and works in the Bureau of Development. She has been leading cyber capacity building projects, mainly for women, but also in the field of child online protection. She’s involved in national cyber policymaking, strategy development, and capacity building. So, Yasmine, over to you.

Yasmine Idrissi Azzouzi:
Thank you. Thank you very much, Laura, for this very timely and very important topic, and thank you for the invite to share. So, what I think is that it really boils down to making the field attractive, and the way to do that is twofold. Of course, quantitative measures are important to attract people. In fact, many communities are underrepresented in the cyber workforce, including women and youth, and people often just don’t envision themselves in cyber security jobs. They’re not aware of the many opportunities present in this important and growing field. There is a need to raise awareness of the types of roles that are needed. Cyber security is not just technical, it’s highly multidisciplinary, and we often forget that the public sector also means securing schools, securing hospitals, securing ministerial departments, and other key critical infrastructure for some countries. So the need is really there to focus on attracting marginalized communities as well. One way of doing that, and the public sector can do this as well, sometimes better than the private sector, is offering benefits like gender-sensitive work arrangements, childcare, parental leave, etc., so that there’s also better inclusion of women. So that’s attracting people from a quantitative point of view, but it’s also important that once people are there, they stay, and the idea is to be able to retain them, and the best way to retain them is through some qualitative measures. One of them is to offer opportunities for career progression and leadership roles, even for people that have technical profiles, to be able to offer them the capacity to jump from a fully technical role to one more of leadership and maybe policy. And there needs to be this accommodation, in a way, for multidisciplinary roles.
What is often observed is that there is a shortage among upper-level positions as well, so we need to acknowledge that there is a revolving door between the public and the private sector, and really invest. The investment should be twofold. Obviously, investment in people is key, but so is investing in technology. Cybersecurity professionals in the public sector are often very overworked, sometimes doing the job of several people. So, investing in software that can automate some aspects can certainly help, but of course, the most important thing is really to invest in people. The idea is to encourage people in your institutions to take part in capacity-building programmes, and at the ITU we do offer them for the public sector, so I will take a little moment to explain a couple of them. One is the cyber drills: comprehensive, holistic exercises that are cross-country and cross-region, aimed at people in the public sector taking care of national policies and national incident response. These also usually serve as exchanges between countries, trying to understand the lessons learned and the common challenges.
Another programme that we have for people in the public sector, very specifically for women, is Her CyberTracks, on which we are collaborating with the Federal Foreign Office of Germany, with Regine here, and with GIZ as well. The idea is, of course, to allow women not only to participate in national cyber policymaking and in cyber diplomacy at the international level, but also to do it meaningfully: through training to better understand diplomacy and policy for the technical profiles, but also through a holistic approach of providing mentorship, role models, and networking, which is definitely key in this field, helping people understand that the challenges they are going through, other people have gone through, and that they are not alone in this. So, as introduced by Laura at the beginning, there is of course the salary motivator, which is often an issue between the public and private sectors, but what the public sector can do is offer what we call long-term motivators, as opposed to a higher salary, which is a short-term motivator. One thing we can focus on is promoting this as an opportunity to work on something challenging, appealing to people’s sense of purpose, which of course applies not only to cybersecurity but to the public sector in general: believing in a mission where you are contributing to something that is for the betterment of your country and society. And one thing that we can do, and that is often overlooked because in the cybersecurity community we always have this echo chamber, is to look at other fields. There are a lot of studies on turnover rates and on maintaining employee engagement in the public sector, and the public sector does have some strengths.
Obviously, job security, stability of income, and some non-financial benefits, as I mentioned: parental leave, paid leave, pensions. Also, of course, the prestige of working for one’s government. So, meaningfulness, accomplishing something of real value, satisfaction and pride in the work performed, and the possibility to progress as well. But one thing, last but not least, and I will conclude on that, is recognition. With all the training in the world, I like the metaphor of a plant: you can give it all the water and nutrients it needs, in this case education and training, but it will not grow without sunlight, without shedding light on it. So, shedding light on people’s accomplishments and valuing them as people, not just numbers, is definitely something that can help attract more people, retain them, and make them proud to be working in the cybersecurity workforce for their government. Back to you, Laura.

Laura Hartmann:
Thank you, Yasmine, for the input. I think the key points we heard from Yasmine were that more attention should be paid in the public sector to promotion strategies: really shifting the awareness of people who are maybe not so attracted to cybersecurity as a working field, such as underrepresented groups, marginalized groups, and women, so that we can give them the possibility to join the workforce through cyber capacity building initiatives. Also, the opportunity for technical staff to switch to leadership and managerial roles, and vice versa. These are some key points we should keep in mind for the discussion later. I would now give the floor to one of our speakers online, Martina Castiglioni. She is a programme officer at the ECCC, the European Cybersecurity Competence Centre and Network. Before joining the ECCC, she was the head of training and advisory services at the Italian cybersecurity competence centre and was heavily involved in the development of cybersecurity training campaigns at the national level. She worked in cybersecurity advisory services and supported operators of critical infrastructures, and her expertise includes cybersecurity risk management, governance, cybersecurity auditing and assessments, and crisis management. Thank you, and over to you, Martina.

Martina Castiglioni:
Good morning, everyone. Can you hear me? Yes, we hear you. Okay, thank you. Can you also see me? Not yet, we cannot see you at the moment. Okay, maybe that still changes. So, good morning or good afternoon, everyone. Firstly, thanks for inviting me here. This is really the third time I will try to bring a glimpse of what is going on in Europe regarding this topic, and in particular under the competence of the European Cybersecurity Competence Centre, the ECCC. As mentioned by Laura, I am indeed working for this new centre focused on cybersecurity, and I am calling from Bucharest, where the European Cybersecurity Competence Centre is located. Let me introduce the role of this centre in a few words. The centre was established in 2021 but has been operational from this year, and together with the member states, the industry, and the cybersecurity technology community, it aims to shield European Union society from cyber attacks, maintain research excellence, reinforce the European Union industry in this field, and boost the development and deployment of advanced cybersecurity solutions. The ECCC will play a key role in delivering on the ambitious cybersecurity objectives of the DEP, the Digital Europe Programme, and of Horizon Europe, and these two programmes also aim to narrow the cybersecurity skills shortage in Europe, in both the public and the private sector. I would like to explain that the ECCC operates in a new European cybersecurity framework, because it works together with the so-called network of NCCs, the National Coordination Centres.
So, in this way, each member state has a contact point for the European Cybersecurity Competence Centre; the NCCs receive funding from a European programme, develop cybersecurity capacity building activities at the national and regional level, and promote cybersecurity educational programmes at the national and regional level, under the big hat of the European Cybersecurity Competence Centre. In this way, we aim to facilitate collaboration and the sharing of expertise and capacities across Europe, in particular among research, academia, and public authorities, under the so-called Cybersecurity Competence Community, which is the third player in this new European Union framework, after the Competence Centre and the NCC network. The initiatives going on in Europe to tackle the topic of this agenda are based on the fact that the security of the European Union cannot be guaranteed without its most valuable asset: our people. As shown by the latest reports at the European level, mainly published by ENISA, a large number of cybersecurity incidents have also targeted public administrations and governments in member states and public bodies at the European and national level. So we need professionals with the skills and competencies to prevent and defend the Union, including its most critical infrastructure, against cyber attacks and to ensure its resilience. But we also need skilled people to implement the cybersecurity legislation and deliver on its legal and policy requirements; otherwise those pieces of legislation will not achieve their objectives. Regarding the initiatives going on so far in Europe, we have many public and private investment initiatives focusing on closing the cybersecurity skills gap, also in the public sector.
Of course, these initiatives already existed, but the situation shows that the cybersecurity skills gap still represents a huge issue, which may be due to the lack of synergies and coordinated action taken so far to close it. Indeed, Europe’s Digital Decade Policy Programme 2030 has set the target of increasing the number of ICT professionals to 20 million by 2030, as well as narrowing the cybersecurity skills gap in the public sector and the gender gap in this field. I would like to mention a concrete initiative that has been established this year, the so-called Cybersecurity Skills Academy; I will refer to it as the academy. It will be our single point of entry and synergies for cybersecurity education and training and will also offer funding opportunities and specific actions supporting the development of cybersecurity skills. Of course, the main focus will be on skilling cybersecurity professionals in Europe, and the implementation of the academy will be supported by 10 million in funding from the Digital Europe Programme; the European Cybersecurity Competence Centre will indeed implement the strategic objective on cybersecurity under this programme. So far, the academy has its concrete representation in a dedicated, publicly available website, but strategically and operationally the academy rests on four pillars. The first one is fostering knowledge generation through education and training by working on a common framework for cybersecurity role profiles and associated skills.
And here I would like to mention that ENISA has defined the European Cybersecurity Skills Framework, which defines the roles, profiles, and competencies of cybersecurity professionals, and this will be the academy’s first basis to define and assess current skills and the skills we need, monitor the evolution of the skills gap, and provide indications of specific new needs, also for the public sector. Another important pillar will of course be designing specific cybersecurity education and training curricula suitable for these specific roles, and here I would like to mention the project CyberSecPro, funded by the DEP, which brings together 17 higher education institutions and 13 security companies from 16 member states in order to collect the best experience and become the best practice for all cybersecurity training programmes that will be developed under the academy. Then I would like to mention a responsibility of each member state: the NCCs, the National Coordination Centres, are invited to explore how to set up a so-called cyber campus in each member state. The cyber campus would aim to provide pools of excellence at the national level for the cybersecurity community, and the academy will help with networking and coordinating the activities of the different cyber campuses in each member state. This responsibility stems from a specific piece of European cybersecurity legislation: according to it, each member state should adopt, as part of its national cybersecurity strategy, specific measures to mitigate the cybersecurity skills shortage. Then I would like to mention briefly the last important pillar of this academy initiative, and then I will close; I know that time is running out. We have understood that we have to gain better visibility of the different initiatives that are going on in Europe.
So another important pillar of the academy will be ensuring better visibility of the available funding opportunities for skills-related activities in order to maximize their impact. On this objective, I would like to mention an ongoing working group managed by the European Cybersecurity Competence Centre, in collaboration with ENISA and the Commission, that aims to map all the cybersecurity training initiatives and all the funding opportunities related to narrowing the cybersecurity skills gap. With a better and more efficient overview of the current cybersecurity funds related to this specific topic, this will help to better define the funding priorities of the academy and of the Digital Europe Programme more broadly. Then I will close this overview of the academy. Another important pillar will be defining indicators to monitor the evolution of the market and to assess the effectiveness of the academy’s actions. Under the academy, a specific methodology will be developed to measure progress in closing the cybersecurity skills gap. We will define specific cybersecurity indicators to monitor the evolution of the cybersecurity labour market, in order to be able to assess, re-elaborate, and adjust the funding opportunities and the activities that are going on. Specific KPIs on cybersecurity skills will be elaborated by ENISA by the end of 2023, and with these KPIs we will be able to collect data on indicators and report on them, with the first collection by 2025. Another issue revealed so far is that we did not have a specific report on the cybersecurity skills gap based on common indicators and common KPIs.
And of course, this has been an issue in identifying the priorities for tackling the cybersecurity skills gap, in particular in the public sector. So these new advancements in elaborating KPIs and reports will be an important pillar for the overall achievement of the Cybersecurity Skills Academy. I will hand back to you, Laura, and sorry if I took more time.

Laura Hartmann:
Thank you, Martina. Just two highlights from what you said. Thank you, first of all, for giving us an overview of what the EU institutions and agencies do. I think it’s really interesting what you said about the lack of synergies between them at the regional level. This highlights the need for even more coordinated approaches to cyber capacity building, again to retain, re-skill, attract, and up-skill people, and it’s reassuring that the ECCC is taking the initiative at the EU level. What also became clear is that we need to focus on concrete roles: what kinds of professions do we want in the job market? These have to be identified first, pointing to the need to conduct studies on that, because if you research this, it is always about the workforce gap in cybersecurity but never about concrete roles. So, coming back to the panel and the speakers, I’m happy to give the floor now to Komitas Stepanyan on my left. Komitas is the Technology and Cyber Security Director at the Central Bank of Armenia, and he is part of the team working on national-level digital transformation, including cybersecurity, in Armenia. He is a short-term consultant for the World Bank on digital transformation and GovTech activities and also works partly with the IMF on cybersecurity initiatives and programmes. Komitas, please take the floor.

Komitas Stepanyan:
Much better. Thank you very much for this opportunity to speak on this very important panel. You have already mentioned a couple of important things: that there is a huge shortage of cyber professionals, and that cybersecurity is quite wide. Everyone talks about the cybersecurity skills shortage, but what kind of specialist shortage do we have? For example, if you search, you can find that right now almost all organizations, including the public sector, need database admins, network admins, people who really understand how the entire technology and server infrastructure works. These are a little technical, but they are part of cybersecurity. Overall, let me explain what the Central Bank of Armenia, as a public institution, and my country overall are doing to try to fill this skills gap in the public sector. First of all, we need to increase the pipeline. To do that, there was an interesting initiative and campaign run by the leadership of the Central Bank of Armenia. Imagine: the deputy governor of the Central Bank of Armenia was leading the team, and we met the most recognized universities in Armenia, the top five, first of all to talk with the management and see what kinds of programmes, specific subjects, or syllabuses could be developed for the different universities. Secondly, we met the students to promote this activity. When I was a student, I dreamed that somebody would come to my university to talk about such an initiative: that, for example, as a young student in second or third year, I could have a chance to join the public sector and work for the Central Bank of Armenia, the Ministry of Finance, or another public institution. It had a huge impact, and we had a great response from different universities. We continued this campaign, collected more than 300 CVs, and conducted interviews to identify 30 talents to train intensively for six months, to have a team working on cybersecurity.
Currently, it is very important for any public institution to have an incident response team; this is one of our main strategic objectives. Two years ago, we established an information systems agency responsible for three main pillars. First is digital identification at the national level. Second is the interoperability of public institutions, and not only public institutions. The third pillar is cybersecurity. This institution is responsible for working with academia and different universities to create specific subjects, based on our needs, to fill this gap. After that, we worked with internationally recognized organizations to provide specific training for 25 people, carefully selected from different public institutions, different ministries and agencies, including the Central Bank and a couple of commercial banks as well. We had special training in incident response, forensic research, and compliance and auditing of cybersecurity incidents. Because, once again, if there is not enough capacity to recognize that there is an incident, then it is game over. According to the statistics, the average time to identify a cyber breach is over 200 days. So imagine: cyber criminals are hacking your environment, and only after 200 days are you able to identify it, and some institutions, unfortunately, are never able to identify that their systems have been breached. So this was the second initiative. Right now this process is ongoing and I am waiting for the results; the exam will be at the end of October. I hope a couple of my colleagues will become certified in cybersecurity incident response, which is really very important. We would like to continue this training programme, and a couple of those who are already certified can become trainers or share their experience with others.
We are also cooperating closely with the private sector, because we all know that many good professionals work in the private sector. We have already heard that the public sector cannot be attractive if you are looking only at salary. The private sector pays more, but the public sector has its own beauty, because we are working for a mission, and mission and challenge are more important for many people. During your career, particularly in the middle of it, you may realize that money is a very short-term motivator, as Yasmine already mentioned, and I fully agree with this. But the objective, what you are doing, matters: when you show your young colleagues that whatever they do will have an impact on thousands or maybe millions of people, that can be the greatest motivator for them. So we are continuing this programme, and I hope that after another year we will have more certified professionals. We are also working on setting up a national CERT. This is also an ongoing process; after that, we would like to have a recognized national CERT that works with other international CERTs, and then we will apply to become a member of FIRST. I hope this helps as a concrete example.

Laura Hartmann:
Thank you very much for your points, Komitas. What became clear is mission and purpose, which, as you mentioned, can perhaps make up for the lack of funding and for the fact that private companies pay more; that is the top argument cited for the lack of cybersecurity skills in the public sector. I’m happy to take this into the discussion later. To speed up a bit, because we are running a little short on time, I will give the floor now to the second virtual speaker we have today, Marie Ndé Sene Ahouantchede. She is a digital specialist with 22 years of experience in the field of information systems and digital transformation. In August last year she joined the ECOWAS Commission, where she works as a programme officer for applications and e-government and is responsible for coordinating and driving the digital transformation efforts on behalf of the Commission in ECOWAS. The floor is yours, Marie.

Marie Ndรฉ Sene Ahouantchede:
Thank you, Laura. Good morning, everyone. I am very delighted to have this opportunity to join the panel, so thank you on behalf of the ECOWAS Commission. ECOWAS is the Economic Community of West African States. I am not primarily a cybersecurity resource person, but I will try to share in this panel the ECOWAS approach and perspective on cybersecurity, especially on the workforce. One second. Can you try to switch on your video so that we also have the chance to see you in the room? Thank you. Can you see me, please? Yes, we can, perfectly. All right. So, the ECOWAS region, and Africa more widely, faces growing cybersecurity challenges; I am sure that you are aware of that. This is a result of the digital transformation, which has given rise to new opportunities for malicious cyber activities. Many studies have pointed out the urgent need for a skilled workforce capable of effectively addressing the growing cyber threats the region is facing. Globally, on digital skills, it was announced in 2023 at the 12th Assises of African Digital Transformation in Madagascar that the need for digital skills in Africa will reach 230 million people by 2030, while only 5% to 10% of this need is covered, depending on the country. Specifically on cybersecurity, KPMG’s 2023 cybersecurity outlook mentioned that the percentage of government and public sector organizations with the appropriate cyber resources to meet their needs is only 29%. Given the persistence of the critical need for cybersecurity professionals, the coordinated regional approach of ECOWAS, the ECOWAS cybersecurity agenda, aims to increase cyber resilience in the region and to support member states in strengthening their capacity building, which certainly requires the availability of a skilled cybersecurity workforce. The supply of cybersecurity specialists in the ECOWAS region is, I can say, under capacity in the face of exploding demand.
To address this concern, the ECOWAS Commission and West African governments are multiplying efforts in cybersecurity education and training initiatives, identification of cybersecurity talent, capacity building, cooperation, partnership, and awareness. For example, at the regional level, in collaboration with the EU, the OCWAR-C project, the West African Response on Cybersecurity and Fight against Cybercrime, was set up. As a means of building a sustainable cyber workforce in the region, under OCWAR-C the ECOWAS Commission launched the ECOWAS Regional Cybersecurity Hackathon. This hackathon helped to build a regional pool of cybersecurity youth, to assess the region’s maturity in terms of skills in cybersecurity and the fight against cybercrime, and to increase the interest of youth in digital security. To upskill professionals, still under OCWAR-C, the ECOWAS Commission and its partners are supporting the judicial authorities of the region to tackle the need through global capacity building initiatives. In the same dynamic, advanced training was provided in 2020 to member states’ computer security incident response teams in order to enhance capabilities for handling cyber incidents and managing threats. As part of cooperation in cybersecurity, a joint G7-ECOWAS platform for advancing cybersecurity was also launched in September 2022 to increase partnership for the continued implementation of the ECOWAS cybersecurity agenda for a resilient cyberspace in West Africa. At the national level, the countries of West Africa have adopted a series of cybersecurity measures, including the development of cybersecurity education. As an example, I can cite the National Digital Academy that was launched to offer advanced training to Beninese trainers and managers in ICT, artificial intelligence, and cybersecurity; this is the outcome of the partnership between Benin’s Ministry of Digital Affairs and the Smart Africa Digital Academy.
Despite the efforts I just mentioned, African countries are facing a real brain drain. To attract digital professionals, the public sector generally applies specific salary policies, such as bonuses on top of the base salary, but these remain insignificant in a global context of digital talent shortage, whose major consequence is brain drain. Aware that the public sector struggles to compete with the private sector, and of the attractiveness of the market for cybersecurity talent, education and training are now recommended for inclusion in national strategies. The public sector is also exploring a multi-stakeholder approach involving the private sector and international partners. A good example of collaboration in Togo is the Memorandum of Understanding signed with UNECA, the United Nations Economic Commission for Africa, to collaborate on establishing the African Centre for Coordination and Research in Cybersecurity, and I know that efforts are ongoing to fast-track its realization. I also wanted to share the following best practice of public-private partnership: the specific case of the strategic partnership between the Togolese Republic and a Polish company named Asseco Data Systems. This partnership gave birth in 2019 to Cyber Defense Africa, a joint venture company that offers cybersecurity services, mandated by the Togolese Republic to ensure the security of information systems in Togo and beyond its borders. The partnership combines a CERT with a national SOC.
Given the limits of the public sector in terms of its ability to keep its talent, and the global shortage in cybersecurity, the ways out that I see for our region are to elaborate national cyber workforce and education strategies; develop cybersecurity capacity building and education plans; introduce gender diversity by getting girls interested early; revamp public recruitment processes and conditions; expand the talent pool through collaboration; and invest in training and development programmes that promote cyber certification, to develop the talent inside the public sector. Finally, the last way out I see is to follow the ITU guidelines when developing national cybersecurity strategies, especially on cyber capability and capacity building and awareness raising. Thank you, Laura. This is what I wanted to share with you about what is going on in our region. Thank you very much.

Laura Hartmann:
Thank you very much, Marie. Your input is very well noted and appreciated. I think you have highlighted one specifically important point: the note you put forward on brain drain. You are facing challenges that go well beyond the cybersecurity workforce gap, and when working on cyber capacity building initiatives and partner initiatives with countries from the global south, we should be aware that this is a multi-layered challenge. Now over to the fifth speaker of this panel, Ms. Regine Grienberger. She is the cyber ambassador at the German Federal Foreign Office and a career diplomat. Her professional path has focused on EU foreign relations, EU economic and financial issues, and the common agricultural policy. Dear Regine, as the FFO has been increasingly involved in cyber capacity building for the past years, what would be your lines of thought on the topic?

Regine Grienberger:
Thank you, Laura. First of all, I want to say I appreciate very much what all the other speakers on the panel have said, because it really highlights the complexity of this problem of how to retain the cyber workforce in the public sector. I am speaking as a civil servant who works inside a public institution that experiences the shortage of experts, but also as somebody who is engaged in cyber capacity building. Her CyberTracks has been mentioned; Marie mentioned the ECOWAS action plan and other projects that we think, or hope, are helpful to establish a new or better base of cyber experts with our global partners. One thing I wanted to raise before I advance with my notes is a discussion that I often have with my counterparts: does the public sector really need cyber experts, or can everything be outsourced to the private sector? We have a discussion going on both within Germany and within the European Union, which many of you will perhaps know from home, about what digital sovereignty is: this ambition to have digital sovereignty as a government or as a state, and what it demands from governments. Certainly, having control over our own networks as governments is one important part of digital sovereignty. So I would answer the question of whether it is possible to outsource cybersecurity to the private sector like this: yes, we have seen very good examples where this works well, for example in Ukraine, which relies heavily on the private sector to maintain government networks in times of war, but you also have to take care of covering your own needs with your own experts.
Our cybersecurity agency recommends setting aside 15% of your digital or digitization budget for cybersecurity measures, including personnel and training, and I think this is a good benchmark if you are wondering how much this will cost you: 15% is a good rule of thumb for how much it will cost at least. Martina described the demands stemming from the new NIS directive, our updated European cybersecurity regulation. In Germany we assume that the number of cyber experts needed for critical infrastructure will be eight times what we have now, because eight times as many institutions will appear on the list of entities that have to comply with the standards of this new regulation, and you can imagine that this means about 10,000 cyber experts missing at the moment this regulation enters into force. This can only be dealt with if we, as the public sector, take really seriously that we have to find and also build these experts. I have perhaps two immediate remedies to propose. One is pooling. We recommend this, for example, for the schools and universities you mentioned, and I would say also for municipalities: perhaps they are too small to afford their own cybersecurity expert, but certainly not too small to join forces with other municipalities in a similar situation and pool cybersecurity services for several public institutions. The other one, and that is also a lesson learned from Ukraine, is that moving things to the cloud makes it much easier to take care of cybersecurity. I am not recommending a specific offering from a specific company, but we have seen that this helps, because things are then well protected by the most advanced and sophisticated tools. A third recommendation I would like to make is to make it a little easier for the few cyber experts you have by raising the digital and cyber literacy of your workforce in general.
For example, we are conducting a cyber security month this October to inform our colleagues about cyber threats and how they are themselves high-value targets for cyber criminal organizations and state actors conducting espionage operations, so that they know a little better how to protect themselves with easy means. It is a cliche, but the weakest link in the cyber security chain is always the humans, so my recommendation is: make it easier by increasing the cyber literacy of your workforce. Now, what kind of experts are actually needed? This has also been mentioned already. I would say, and I see it in contact with my colleagues from our IT department, you need technical experts and you also need people who are able to speak the language of management and hierarchy, so that the higher levels of management of a ministry, for example, understand the need and the requests, and also understand the need to invest and to raise these costs, because improving cyber security is a costly exercise. So I think by hiring one person who can actually speak the language of management, you might be able to free money to hire ten more experts who then do the groundwork. In exchange with my colleague from the IT department, I also learned that they are basically always hiring people who are not yet up to the job they are meant for. They hire people with more generic knowledge than needed, who are then upskilled or reskilled on the job by their colleagues or through short-term cyber security reskilling or upskilling programs that we buy from the market. Then there was this issue of competition with the private sector. Of course, in most cases the money and salary offerings will not be adequate to attract the workforce. But purpose is. Purpose is really an important thing.
And purpose is not only recognition by the higher levels, but also understanding at which part of the machine you are actually working. So a more holistic view of cyber security might also help to retain people in the sector, if they understand that in the public sector they are allowed to really contribute to a bigger picture that they also understand. Plus job security and flexible work arrangements, which I think are important especially for women; also the particular protection that civil servants have in the public sector is something interesting. Of course, this is not the case for all workplaces, but for some, for example for our ministry, it is an argument for women to join the foreign office rather than a private sector company. Then there is this idea that it is not worth training experts in the public sector because they will be stolen by private companies afterwards, so why invest. I would recommend: don't think in these terms. Think of it in terms of job rotation. You train the people as the public sector, you release them to private companies, and you gain them back at a later stage of their career. I think this is particularly true in the whole field of IT experts. They have so many opportunities and usually are also curious people, so you should let them look at other opportunities and perhaps gain them back. My last point is that we are in a very transformative period with regard to IT, digitization, and cybersecurity. Our job profiles and educational profiles, and somebody mentioned it, was it Marie or Laura, these curricula that we have in high schools, graduate schools, universities, and business schools are perhaps not up to date. So we should work, in our case, with our Ministry of Education and Ministry of Labor to update the job profiles and educational profiles, so that the institutions are really able to produce the kind of knowledgeable people that we need.

Laura Hartmann:
Thank you. Thank you very much for this comprehensive input, Regine. And I will directly hand over to our last speaker joining virtually, Lara Pace. Lara has just under 15 years of international experience in building cybersecurity capacity across the world. She has worked in multilateral organisations, national government, and academia, and is now working for PGI in the private sector, where she is the head of the capacity building practice. Lara, it's my pleasure to hand it over to you.

Lara Pace:
Hi, Laura, good afternoon. It's been fascinating listening to my colleagues and understanding all these initiatives that are under way. I guess you're quite tight for time? Yes, that's correct. Can we do overtime by five minutes, please? Yes. OK, so I'm going to leave you with a couple of points. Essentially, I've been working internationally for 15 years, and at the beginning of my career I really focused on developing governance structures and cybersecurity strategies, and really creating plans at the national level. Now, having done that for so many years, I'm focused on essentially doing the same, but helping governments build the human resource to implement those strategies. There were a couple of points that I picked up on, namely the ambassador's point about job rotation, which is fundamental. This might sound a little controversial, but I think if you have skills and expertise in the public sector that suddenly move into the private sector, that could fundamentally be seen as a positive. I'm in no way encouraging brain drain here, but essentially responding to cyber attacks and cyber incidents requires a whole-ecosystem approach. So suddenly you're sat in the civil service and you are working with private sector individuals who have been trained by the public sector and also understand the challenges of the public sector. And I'm now sat in the private sector thinking about the challenges from the national perspective and can offer interesting solutions. So that's one point I wanted to make. And I really agreed with Yasmine's contribution on retention of skills in the public sector, in terms of really creating an inclusive environment and having very clear career pathways so people can understand how they can progress, because as human beings we all want to progress and better ourselves, both personally and as an organization or a national institution.
And the last thing is incentivization, which does not necessarily equate with more money. The last point I wanted to make is that, and sometimes I'm guilty of this, as a cybersecurity professional working internationally I tend to think cyber is the ultimate priority at a national level. Actually, we really need to consider, if we are going to make interventions in terms of skilling up and training, that there is also a similar initiative happening to ensure that jobs are being created to retain that talent, especially in emerging markets. We get a lot of requests and see a lot of RFPs for governments to have skilling programs, but sometimes that happens in a silo: you have this very intense skilling-up program, and then the expertise does not remain within that geography. So it really has to be a two-pronged approach, or maybe not two-pronged; I think about capacity building as a 1980s hair comb, where each tooth has to come together in a national coordination effort. Yeah, I think I wanted to leave you with just those key points. We work from Latin America and the Caribbean all the way to the Pacific, helping governments scale up. Somebody mentioned academies, which is one of the things that we do. But I thought I would just leave you with those comments because I know you're very tight for time. Thank you very much, Lara.

Laura Hartmann:
So if we have some more minutes, can we allow for a question from the audience? Is there a question from the audience? Please come in. Yes.

Audience:
I’m from the government of Sri Lanka, and we have received many benefits from the EU and some partner countries in capacity development, including the development of the cybersecurity strategy and policy. So we developed the policy for five years, and now the challenge is to implement it. I have been a civil servant for the past 23 years. By the time the word ransomware came to the media, we just grabbed it, but hardly anybody in the public sector had heard of it, and we got private sector experts to explain. So from time to time we collaborate with the private sector. And in a separate track, the military and armed forces have their own kind of cyber defense, but we keep it in a very strategic way to bring their knowledge into normal civil service work and other work. Somebody mentioned capacity building and curriculum change; I think it was the remote speaker. I think we brought this into most of the ICT or digitalization-related curricula in schools and academia. But as a country we face the challenge of losing talent from the market to overseas and to the private sector itself. We are a small country of 20 million, and the IT industry currently has about 30,000 vacancies for graduates, so there is a challenge for the private sector itself. So without collaboration, capacity building within the government itself won't be a sustainable solution. That's the way I see it.

Laura Hartmann:
Thank you very much. Would any of the speakers like to come in, or shall we leave it with a note of agreement? OK, so then I would like to thank you all for listening. Thank you very much to our speakers joining virtually in the very early morning in Europe and in Africa, and thank you very much to the speakers here on the panel. And you want to make a note? Yes, please. Hello. Hi, thank you. We are hearing lots of things you have discussed, but we know that many times

Audience:
governments and the private sector get hacked. But I want to know how to fully protect any government or any private sector from hacking. Can you tell me something? So the question was, if I understand correctly, how to fully secure government systems from hacking attacks? I can try, being very technical. I started my career at a very technical level, and now I'm at the leadership level. There is nothing in the world that is impossible to hack, and there never will be, because the digital world is imperfect. There will always be weaknesses that can be used to hack different types of systems, maybe the Pentagon, I don't know, the White House. We've seen such activities and we will see them in the future. Nothing can be 100 percent safe, and technology always has weaknesses. OK, thank you very much. I think we need to close the session now to give the floor to the other session

Laura Hartmann:
that's coming and is about to take place here. So let us all agree that a holistic approach is very important: beginning with education, and then an ecosystem approach, as our last speaker, Lara, voiced; I think that's fundamental. Cross-industry initiatives, so that we go from a nice-to-have to really raising awareness that this is a public safety issue as well. And ultimately, yes, we need people who can talk to management, who understand the need for investment, and who can translate this. So thank you very much, everyone. And happy IGF. Thank you.

Audience: speech speed 177 words per minute, speech length 498 words, speech time 169 secs
Komitas Stepanyan: speech speed 169 words per minute, speech length 942 words, speech time 334 secs
Lara Pace: speech speed 172 words per minute, speech length 668 words, speech time 233 secs
Laura Hartmann: speech speed 138 words per minute, speech length 1716 words, speech time 744 secs
Marie Ndé Sene Ahouantchede: speech speed 113 words per minute, speech length 1047 words, speech time 558 secs
Martina Castiglioni: speech speed 139 words per minute, speech length 1716 words, speech time 742 secs
Regine Grienberger: speech speed 138 words per minute, speech length 1433 words, speech time 621 secs
Yasmine Idrissi Azzouzi: speech speed 162 words per minute, speech length 1110 words, speech time 410 secs

Future-Ready Education: Enhancing Accessibility & Building | IGF 2023


Full session report

Audience

The analysis reveals several important points regarding the need for improvements in education systems and the impact of technology on learning. Here is a more detailed summary of the main findings:

1. Nepal requires more practical and skills-based education to enhance employability. Despite having years of formal education, Nepalese students struggle to find employment. However, short-term skills courses have been shown to lead to employment opportunities with higher wages in foreign countries. Therefore, there is a strong argument for incorporating practical and skills-based education to better prepare students for the job market and increase their employability.

2. It is crucial to incorporate digital literacy, digital skills, and re-skilling in the education system. Pedagogical changes are necessary to shift from traditional teaching techniques to modern, skills-based methods. Additionally, the proposition for ‘finishing school’ concepts in Nepal highlights the need for teaching relevant and practical skills that align with the demands of the digital era and enable students to succeed in the current job market. In summary, the integration of digital literacy and skills is urgently required in the education system.

3. The youth express concerns about AI readiness and the ethical use of AI tools in education. University students are interested in using AI tools such as ChatGPT to assist with homework. However, questions arise regarding ethical guidelines and best practices for the use of AI in education. It is necessary to address these concerns and ensure that the integration of AI tools in the learning process is responsible and beneficial.

4. The role of individuals and youth in promoting digital literacy is questioned. It is important to understand the actions that individuals can take to contribute to the development of digital literacy. Fostering a culture of continuous learning, digital skills development, and active engagement with technology among individuals and especially the youth is crucial for promoting digital literacy and bridging the digital divide.

5. Finding digital solutions for remote locations to implement AI and digital tools is of utmost importance. In the case of the Philippines, which comprises over 7,000 islands, many remote locations lack internet and utility services. It is essential to develop initiatives and tools that can bridge this digital divide and provide access to AI and digital technologies in under-served areas. This will help enhance education opportunities and equalize access to resources for students in remote locations.

6. Specific initiatives and tools are needed to help under-served, remote schools access AI and digital technologies. The Philippines has numerous remote and under-served schools that require dedicated efforts to provide them with access to educational technology resources. Such initiatives will ensure equal opportunities and bridge the digital gap between urban and rural areas.

7. While the internet and technology themselves are neutral, their usage can be potentially harmful. Educating individuals about responsible and safe technology use is crucial to mitigate potential negative impacts. Promoting digital literacy, online safety, and critical thinking skills will empower individuals to navigate the digital landscape responsibly and safely.

8. The multistakeholder model is critical for inclusive decision-making. Inclusive decision-making requires input from multiple stakeholders to ensure diverse perspectives are considered and social inclusivity is promoted. By involving various stakeholders, more comprehensive and effective solutions can be developed to address the challenges in education and technology.

9. Resilience in digital education requires inclusive design, acceptance of diversity, and empathy. To ensure that digital education is accessible and beneficial to all learners, inclusive design principles are essential. Considering a variety of user needs and creating learning environments that embrace diversity and foster empathy will enable all students to benefit from digital education resources.

10. Community involvement is crucial for a better-shared future. Learning from each other as a community can lead to progress and enrich the educational experience. Active involvement of communities in educational activities and decision-making processes nurtures a sense of ownership and shared responsibility, contributing to the overall improvement of education systems.

11. Promoting inclusive, equitable, and quality education through the internet is important. The Internet Society’s special interest group on education focuses on advocating for this cause. By leveraging the internet’s vast potential, opportunities can be created to provide quality education to all individuals, especially those who are marginalized or face barriers to accessing traditional education systems.

In conclusion, the analysis highlights the importance of practical and skills-based education, the incorporation of digital literacy, the ethical use of AI tools, and community involvement in enhancing the quality and accessibility of education. Furthermore, it emphasizes the significance of inclusive decision-making, resilience in digital education, and promoting digital literacy. Addressing these concerns and effectively leveraging technology will create more inclusive and equitable opportunities for learners worldwide.

Vallarie Wendy Yiega

In the analysis, the speakers highlight the importance of future education being skills-oriented to prepare students for emerging careers. They argue that the shift from regurgitation-based learning to critical thinking and creativity is essential. Furthermore, they discuss the impact of artificial intelligence (AI) and digital tools on education methods.

The speakers also emphasize the need for practical steps beyond policies and legislation to be taken by governments and organizations. They provide examples such as the Universal Service Fund in Kenya, which focuses on providing internet access, and stress the importance of accountability and monitoring in policy implementation.

The accessibility of low-cost devices and internet connectivity is deemed vital for education. The speakers mention telecom players in Kenya partnering with the government to provide low-cost devices and highlight the role of the internet in accessing education tools and platforms.

The analysis also underscores the importance of equipping educators with the necessary digital skills. The need for curriculum integration with digital subjects is identified, and the challenge of the digital skills gap among educators is acknowledged.

The establishment of digital libraries and cross-border collaboration in education is seen as necessary. However, further details or evidence supporting these arguments are not provided in the analysis.

Infrastructure is identified as essential for implementing digital education. It is noted that urban areas often have better access to digital tools, creating a divide with rural regions. The analysis also highlights how infrastructure issues can hinder efforts to understand digital tools. Collaborations with internet service providers and private companies are considered crucial for infrastructure development.

Data privacy and cybersecurity are raised as concerns. The speakers refer to a school that was fined for inappropriate use of students’ images for advertisement, and they note a lack of awareness among educators regarding data protection obligations. Firewalls and data protection measures are suggested as necessary in schools.

Continual professional development and reskilling of educators regarding new technological tools are emphasized. The analysis suggests the need for resources to be created for regular skilling and reskilling, and training on new technologies, such as generative AI, is recommended.

The potential positive and negative impacts of generative AI tools in education are discussed. The analysis highlights that AI can assist in tasks such as drafting emails while adding value without replacing humans. However, it also states that understanding how to use generative AI tools ethically and responsibly is essential.

The analysis includes a quote from a tech lawyer who is favorable toward the use of technology for positive impact, suggesting a pro-technology stance.

Self-education in the field of internet governance is seen as crucial. The analysis mentions that the Internet Society offers online courses to engage in the internet governance space.

Understanding the local context is considered necessary for successfully navigating in internet governance and achieving change and impact.

Joining relevant youth organizations is recommended for enhancing skills in navigating the internet space. The analysis mentions an organization in Asia that has helped build communities, advocate for digital literacy, and provide opportunities.

Persistence and continuous engagement in the space are highlighted as factors that can lead to a better understanding of digital literacy and internet governance.

The analysis emphasizes the importance of carrying this generation of digitally skilled learners into the future. Each-one-teach-one is suggested as a mantra to ensure that everyone learns digital skills.

Lastly, the speakers advocate for contribution through policy-making, building innovative solutions, and raising voices for a future-ready digitally skilled education system.

Overall, the analysis discusses various aspects of future education, including the need for skills-oriented learning, digital access and infrastructure, educator training, data protection, AI tools, and internet governance. It highlights the potential positive impact of technology but also emphasizes the importance of responsible use and continual professional development. The analysis provides a comprehensive overview of the main points and arguments surrounding the future of education.

Ananda

The analysis explores several key aspects of the intersection between technology and education. One important point highlighted is the importance of reskilling educators and contextualising technology in the local context. This emphasises the need to equip educators with the necessary skills to effectively incorporate technology into their teaching methods and adapt it to suit the specific needs and challenges of their students and communities. The argument stresses the significance of this reskilling process, emphasising that it is vital for preparing educators to thrive in the era of Industry 4.0.

Another significant aspect highlighted is the role of multi-stakeholder engagement in the Internet Governance Forum (IGF) and the collaborative effort required to build a sustainable ecosystem. The analysis emphasises that effective policies and initiatives in the technology and education sectors require the active involvement and support of the government, civil society, and the private sector. It argues that the collective efforts of these stakeholders are essential for creating an enabling environment conducive to the successful integration of technology in education.

The potential of community networks and community learning centres in providing internet connectivity is also explored. The analysis points out that these networks, owned and managed by the respective communities, are particularly important in areas where there is a lack of connectivity. An example from Africa is highlighted to demonstrate how community networks can bridge the digital divide in underserved regions. This suggests that the establishment of such networks and learning centres can play a crucial role in expanding internet access and promoting knowledge-sharing in remote and marginalised communities.

Furthermore, the analysis emphasises the value of open courseware in rural technology and its role in improving access to quality education. It mentions initiatives like the Rachel Foundation and Khan Academy as examples of platforms that offer open educational resources. These repositories provide free and easily accessible educational materials, which can be particularly beneficial for individuals in rural areas who may face challenges in accessing traditional educational resources.

An important observation made in the analysis is the need to involve and empower youth in expanding internet access and making it more inclusive. The analysis asserts that young people are the most significant stakeholders in the internet and have a crucial role to play in improving its accessibility and inclusivity. By encouraging youth participation and giving them opportunities to contribute their perspectives and ideas, the analysis argues that the internet can become a more inclusive and empowering tool for all.

In addition to these key points, the analysis also mentions the existence of open source repositories such as Rachel and Colibri, which provide educational resources that can be broadcasted or transferred offline. It highlights the benefits of these repositories, including regular updates and the ability to share educational content without internet connectivity. The analysis concludes by emphasising the need to investigate and implement feasible technological solutions like Rachel and Colibri to meet the demand for education resources. It mentions the feasibility study conducted by Ananda and their team, who are seeking funds to upgrade the deployments of these resources.

Overall, the analysis provides a comprehensive overview of the different aspects of technology and education, highlighting the importance of reskilling educators, multi-stakeholder engagement, community networks, open courseware, youth involvement, and open source repositories. It offers valuable insights into the potential of technology to enhance education and emphasises the collaborative efforts required to ensure equitable and inclusive access to educational resources.

Binod Basnath

The analysis emphasises the need for robust digital education policies in Asia. It suggests that governments should have a wide vision and mission in order to develop these policies. It highlights the experience from the COVID-19 pandemic, which has had a significant impact on education, as evidence for the need for resilience in education systems. The analysis also stresses the importance of adequate infrastructure development. It points out that in Nepal, only a third of community schools have minimal digital resources. Additionally, post-COVID, only 36% of Nepal has broadband connectivity, falling significantly short of the 90% target.

Inclusion is identified as a vital aspect of ensuring no one is left behind in digital education. The analysis argues that inclusion should be embedded from the design to the implementation of learning practices. It points out that without inclusive educational design, vulnerable communities are at risk of being left behind.

Digital literacy and competence development are deemed indispensable in digital education. The analysis highlights the need for content in local languages to cater to local needs. It also highlights that without digital literacy, students, parents, and teachers will struggle to implement digital education programs.

The analysis concludes that a comprehensive approach is needed to build digital education resilience. It advocates for well-planned and inclusive policies, adequate infrastructural development, and competence development. It highlights the pivotal role of competent governance in foreseeing and preparing for the challenges of the digital education system. The analysis also points out a gap in infrastructural development and competence for ICT usage in the education sector in Nepal.

Another argument presented in the analysis is the disparity in employment value for formal education and technical skill training. It mentions a case where a student in Nepal found a high-paying job in Japan after three months of specialized training, but struggled to find a job in their home country after around 15-20 years of formal education. This highlights the need to produce a workforce that caters to the needs of the modern technology era, as currently, young people are not getting jobs due to a lack of required skills.

The analysis also discusses the importance of digital methods in the learning system. It suggests the need for a digital curriculum, digital pedagogy, and a digital means of assessment system to match the pace with Industry 4.0.

The analysis highlights youth participation in Internet Governance Forums as a means to advocate for necessary changes in the digital education landscape. It encourages youths to take their competency back to their communities to empower more youths with digital competency and literacy.

Noteworthy observations from the analysis include the implementation of ICT resource units in Nepal, which create an internal networking system for communities and enable sharing of information through voice calls, video calls, and messaging systems. The analysis also mentions the pilot project of a locally accessible cloud system in the Philippines, aimed at being used for education and health sectors for marginalized and backward communities in Nepal.

The analysis calls for more awareness among policymakers about the use of ICT in education. It suggests that if implemented correctly, ICT education can be more inclusive and accessible. It highlights the need for policymakers to be aware of an ICT education master plan, as this can be an effective tool to reach education goals. The analysis notes that Asian countries are moving towards a second ICT education master plan.

Ashirwa Chibatty

The analysis of digital education and equitable access to the internet reveals several important points. Firstly, it highlights that although the internet is meant to be accessible to everyone, access is not distributed equally. This raises concerns about the fairness and inclusivity of digital education.

One major challenge in the digital education ecosystem is the language barrier. Many digital content and resources are primarily available in English, which may not be the first language for a significant proportion of the global population. This language digital divide hinders individuals’ ability to fully engage and benefit from digital education.

Another challenge highlighted is the existence of skill gaps for digital teaching and learning, as well as industrial skill divides. These gaps limit individuals’ capacity to effectively utilise digital technologies for educational purposes. Bridging these gaps is essential to ensure that everyone has equal opportunities for quality education in the digital age.

Equitable access to digital education requires overcoming various challenges related to accessibility, literacy, assessment, and security. Drawing on an IEEE SA Industry Connections report, Ashirwa Chibatty outlines four pillars essential to addressing these challenges: accessibility, literacy, assessment, and security. Ensuring that digital education is accessible to individuals with disabilities, promoting digital literacy, implementing effective assessment methods, and ensuring cybersecurity are crucial components of equitable access.

The analysis also shows that gender disparities exist in accessing and utilising digital technologies. Women and non-binary individuals face more exclusion due to socio-cultural norms. As per GSMA’s State of Mobile Connectivity Report 2022, women are 20 percent less likely than men to use mobile internet. Addressing these gender inequalities and reducing the digital divide along gender lines is crucial to achieving equitable access to digital education.

The multistakeholder model is emphasised as being crucial when dealing with technology. The involvement of various stakeholders, including governments, educators, technology providers, and communities, is essential to ensure that the use of technology in education is equitable, inclusive, and aligned with the needs of all learners.

Inclusivity and diversity are also highlighted as important considerations in the design process of digital education. Recognising and valuing different perspectives and experiences can lead to the development of more inclusive and effective educational technologies and platforms. Ashirwa Chibatty advocates for learning from each other, being empathetic, and working as a community to drive progress in digital education.

Ultimately, the aim is to achieve a global internet that promotes inclusive, equitable, and quality education for all. Ashirwa encourages individuals to join Internet Society’s special interest group on education, highlighting the importance of collective efforts to advocate for an inclusive and equitable education via the internet.

In conclusion, the analysis underscores the need for equitable access to the internet to ensure inclusive and quality digital education. Language barriers, skill gaps, and gender inequalities are among the challenges that need to be addressed. The involvement of multiple stakeholders and the consideration of inclusivity and diversity in the design process are essential for achieving equitable access to digital education. Creating a global internet that supports inclusive and equitable education is a shared responsibility that requires collaboration and commitment from all sectors of society.

Umut Pajaro Velasquez

The COVID-19 pandemic has exacerbated the digital divide in Latin America’s education system, particularly in rural and marginalized communities. These communities face a lack of access to digital resources and tools for education, intensifying existing inequalities. Due to lockdowns and school closures, the reliance on digital education has significantly increased. However, many students in underserved areas lack the necessary devices and internet connectivity for effective online learning.

To address this issue, Latin American governments have taken steps to promote internet access in rural areas. Laws have been enacted in Mexico, Colombia, and Argentina to prioritize and support community-driven internet accessibility. These efforts aim to bridge the digital gap and provide equal educational opportunities for all students, regardless of their location.

Monitoring and accountability of resources is crucial to improving internet and device access. Misuse of resources intended for enhancing digital access is a challenge that needs to be addressed. Implementing programs to monitor and ensure proper utilization of these resources is essential for effective implementation and equitable outcomes.

Teacher training is vital in delivering quality education, especially in digital learning. However, many teachers were ill-prepared to use digital tools during the pandemic. Tailored training programs that address their specific needs and equip them with the skills to effectively use digital resources for teaching are essential.

Digital literacy is another key aspect of modern education. Developing after-school programs and online resources and incorporating digital literacy into the curriculum can help students acquire skills necessary for success in the digital era. Digital literacy programs should focus on competencies such as problem-solving, critical thinking, communication, and teamwork.

As reliance on digital education increases, cybersecurity infrastructure in schools and educational institutions becomes paramount. Educators and students need professional development opportunities to enhance their understanding of cybersecurity best practices. Implementing strong firewalls, intrusion detection systems, and other security measures is crucial for safeguarding sensitive data and ensuring online safety.

Ethical and legal implications of integrating artificial intelligence (AI) into education should also be considered. While youth are aware of AI’s potential, they may not fully understand its ethical and legal aspects. Educators should teach students about the ethical considerations and legal frameworks surrounding AI use to ensure responsible implementation and usage.

Building human capacities, such as critical thinking, in AI education is important. Emphasizing critical thinking and problem-solving skills can help students navigate the changing landscape of technology and utilize AI for positive outcomes.

Voice plays a crucial role in advocating for desired technologies and effective implementation. Through participation in policy-making processes, individuals can contribute their perspectives and shape the development of technology infrastructure in education.

In conclusion, education’s future entails constant digital transformation and adaptability. Addressing the digital divide and education inequality is crucial, particularly in the global south. Ensuring access to necessary resources, such as internet connectivity and devices, while developing the skills and capacities required for success in the digital era is essential. By doing so, an inclusive, equitable, and technologically proficient education system can be fostered, preparing students for the challenges and opportunities of the future.

Session transcript

Ashirwa Chibatty:
Thank you very much, and now I will turn it over to Mr. Ashirwa Chibatty. Good morning, everyone. I’m Ashirwa Chibatty, the chair of the Internet Society’s special interest group on Internet for Education, and today I will be moderating and organizing this workshop. This session is for all of us to move towards a global Internet that ensures inclusive and equitable quality education and promotes lifelong learning for all. So without further ado, let me introduce my speaker, Mr. Binod Basnath. Mr. Binod Basnath is co-founder and director of Educating Nepal and Empowering Asia. He is an MPhil graduate from Kathmandu University in development studies with a focus on education. He is a researcher in the field of digital and inclusive education. He was an APrIGF fellow in 2017 and has been an Australia Awards alumnus since 2019, upon completion of a course on inclusive education policies and practices at Queensland University of Technology, Australia. He is also an Australia Awards impact ambassador for Nepal for his efforts on digital education after the COVID-19 pandemic in Nepal. He is a member of the Internet Society’s accessibility standing group, and he is fluent in English, Nepali, and Hindi, and he also speaks a little bit of broken Japanese, I guess. Mr. Binod, please speak a little bit of Japanese. The next talented figure we have here is Ms. Vallarie Yiega. She is an advocate of the High Court of Kenya, an Internet governance lawyer, and a tech policy analyst. She currently works as an associate in the intellectual property and technology, media and telecommunications team at Bowmans law firm. She was a youth ambassador at the United Nations Internet Governance Forum held in Poland, a youth volunteer at the IGF in Ethiopia, as well as a youth leader for the Declaration for the Future of the Internet under the European Union and Czech Republic. She has also been a fellow with the Internet Society, ICANN, AFRINIC, and the Kenya School of Internet Governance. 
She was an ambassador for Digital Grassroots, a youth-led community in charge of building awareness around digital rights in Africa. Valerie, too, is multilingual; she fluently speaks English and Swahili, and believes in being a woman in the area. She probably watches too many Korean movies, so she has a little bit of Asia in herself as well. So Binod is from Asia, and Valerie is representing Africa at the moment. And joining us online, we have Umut Pajaro Velasquez. They have a BA in communications and an MA in cultural studies, and they currently work as a researcher on issues related to digital rights, ethics, and governance of AI. They are focused on finding solutions to biases towards gender, race, and other forms of diversity that are often excluded or marginalized in the constitution of the data that feeds these technologies. They are the chair of the gender standing group of the Internet Society and the coordinator of Youth LACIGF and Youth IGF Colombia. They are fluent in English and Spanish; that’s why we often use them as a translator, and they provide their translating services for free. We also have Shraddha as our online moderator from the same SIG, Internet for Education. So, without further ado, I would like to move on to the next slide. So, we say that the Internet is for everyone. In the Internet Society, we believe that the Internet is for everyone, but there is some food for thought for you. There are some questions that we need to ask ourselves and within our community. Those are: does everybody have equitable access? We say the Internet is for everyone, but is access equitable? What is meaningful connectivity, and what is digital poverty? These are the things that we need to ask ourselves when we talk about the Internet, and when we talk about education for all and Internet for all. 
So, when we talk about digital education, before I move into my slides: the flow of this session will be to briefly set the stage and then move to our speakers. There are a few questions that we need to address, and our speakers are from diverse regions, from Africa, from Asia, and from Latin America and the Caribbean, so we hope to have diverse voices here. So, there are certain challenges when it comes to the digital education ecosystem. What are those challenges? The first one is the language digital divide. A lot of the content that is available on the Internet is in the English language, which might not be the first language of everybody. Actually, it’s not the first language of most people, and there are people in our region who are not that fluent in English, so that’s one of the challenges to quality education. The next challenge is the lack of skills for digital teaching and learning. Post-COVID, we all moved towards digital education. Everybody was focused on work from home and online classes, and during online classes, the teachers and administrators didn’t have adequate knowledge and skills for teaching and learning. And the third one is the industrial skill divide. We’re moving towards the fourth industrial revolution, as we say, Industry 4.0, and Education 2.0 for Industry 4.0, so how do we cater to those needs? There is still a lot of divide there, and what that is doing is furthering the digital divide, and that’s not what we want. So, moving further, I would like to share the IEEE SA Industry Connections report on digital resilience. You can scan the QR code for the full report, but when we look at the challenges, we have four pillars of challenges. One relates to accessibility, which connects to infrastructure, connectivity, and the language divide. The second one is literacy, which focuses on digital content and solutions, skills for teaching and learning, and the industrial skill divide. Third comes assessment. 
How do we measure the quality of learning, and how do we engage a learner in the online space? And the fourth one is the challenge of security: cybersecurity, human resilience, building human digital resilience, which is most important, and the future implications that we might bring about when we are shifting the whole world towards a blended form of education. And, again, there are people who do not have an Internet connection, so they cannot get an education. There are people who have an Internet connection but are not very used to it, so they don’t know how to use it. And the third group is those who know how to use the Internet; those who are very active on the Internet are very prone to cyber risk, and when we talk about education and bringing our young kids into this space, we have to be very careful about those risks. And, yes, the socio-cultural norms that restrict the role of women and girls in society hinder their access to and use of digital technologies. As per GSMA’s State of Mobile Connectivity Report 2022, worldwide, women are 20 percent less likely than men to use mobile Internet. And when we talk about gender, it’s not a binary. It’s not zero and one. There is a whole spectrum, and if women are 20 percent less likely, non-binary people are even more excluded. So with that, I would like to move directly to the first question that we would like to address. It will be an interactive session. We will be asking questions, and the speakers will share their experiences and set the stage, but we also want more interaction from the audience here, so that together we can learn more and do something for the betterment of society. So with that, I move to my first question: how can governments and organizations ensure equitable access to digital education infrastructure in the Asia-Pacific, Africa, and Latin America and Caribbean regions? 
First, to set the stage on this question, I would like to turn to Binod Basnath to share his experience from Asia’s perspective.

Binod Basnath:
Thank you, Ashirwa, for the question. Before I address the question, I’d like to welcome all of you to this session, both those who are participating here at the Kyoto International Conference Center and those participating online. Thank you all for being here, and I do hope for very proactive participation and engagement from everybody throughout this session. Coming back to the question: the question actually does not have a rigid answer. The question in itself is very broad, and I cannot take much time elaborating on every aspect, so I’ll try to be as precise as possible and sum this up in four points. Talking about having resilient digital education for each economy, especially for Asia, on my behalf it will be much more about Nepal, because that’s where I’m from, and that is the context I’ll be bringing in. But it won’t be just Nepal; it will represent many least-developed countries and developing nations as a whole. So, for the first part, I think it’s very important for a nation, for a country, for its governance, to have a clear vision and mission, and this also coincides with the research done by ISOC. Unless we have a good vision, we cannot bring in good policies for the nation. Especially for Nepal: we have only just moved into a federal system of governance, since 2015, so we’re quite young with the federal system. We moved from a constitutional monarchy, and powers and responsibilities have been dispersed among three tiers of government: central, provincial, and local. Different aspects, different policies, and different duties have been assigned to different tiers of government, and we still have to make many more policies and programs that help each government understand what its roles are. So that is one aspect that we need to think about. And especially after the COVID-19 pandemic, we’ve understood that COVID-19 had a huge impact on education as well, especially for the LDCs. 
It was a hard time for education, and it is safe to say that remote education actually prevented a complete meltdown of education during the lockdown periods of COVID-19. That said, for Nepal, rather than the use of the internet for education, the use of radio and television was more effective, because we did not imagine this earlier and were not prepared for it. And that was the same for other developing nations as well. So the policies that were devised before COVID-19 have to be reconsidered and re-evaluated. Similarly, when we had an earthquake in 2015, there was disruption in education, but then the government came up with different building codes and different modalities of learning. But after COVID, I think we’ve forgotten a lot about disasters, and we’re going back to our normal lives, forgetting what we had to change for education. And it’s easy, because we have the Education 2030 agenda, SDG 4, and its targets. It’s easier for governments to align themselves with those targets and meet them. So the first point for me is a proper vision and mission. The second point is infrastructural development. Of course, without proper infrastructure, we cannot imagine the new ways of learning: remote education, hybrid learning, or blended education. Talking about Nepal again, we have over 35,000 schools: around 27,000 of them community schools, 6,000 institutional or private schools, and over 1,000 religious schools. The private schools by themselves are quite well off in comparison to the community schools. When we look at the data, barely one-third of those community schools have the minimum infrastructure for ICT. Now, having infrastructure for ICT is one thing, and adopting it for education and other uses is another. 
Even having infrastructure may not be enough if we’re not using it, because it is just a medium. And when we look at the internet penetration rate: we had a huge target of reaching 90% broadband connectivity by 2021, but post-COVID we’ve only reached around 36%. So without infrastructure and without its implementation, we cannot imagine the new modalities of learning for schools and children. My third point will be one of the most important pillars, and that’s inclusion. We need inclusion for everyone, because anyone can become a person with a disability if we are not provided with the right infrastructure or other forms of support. We have the target of not leaving anyone behind, so when we design any curriculum or any learning practices, it should be inclusive from the design to its implementation and beyond. Inclusion for persons with disabilities, IDPs, women, marginalized communities, vulnerable communities, and across genders: those are the things that have to be considered from the beginning to the end. And the fourth aspect is, of course, connected again with infrastructure, but it’s about content, competence, and skills. We need content that can be used for digital education, and we need it in different languages that are tailored to the local needs of the people. And talking about competence: without teachers, students, parents, and everyone having competence in digital literacy, it’s very difficult to implement these programs in schools or communities. I think I’ll come back to this point for the other questions, but I’ll sum up my answer within these four points. Thank you.

Ashirwa Chibatty:
Thank you so much, Binod. Of course, the policies that we make in Nepal, from my personal experience, are sometimes good, but we also need to be more realistic than idealistic when it comes to educating kids, because the Internet and digital technology are just tools, and without human interaction, the basic educational needs of young children cannot be fulfilled. That being said, I think a lot of this echoes with Africa as well. So, Valerie, over to you: how are things in Africa, how do you think the African governments and African Union organizations are doing, and what can they do for equitable access to education?

Vallarie Wendy Yiega:
Thank you so much for that question. I think a lot of what has been said by Binod is very similar to what happens in Africa as a continent, but also in my country, Kenya, where I come from. Because he has handled it very well, I’ll just give you contextual examples of what happens and why we’re talking about being future-ready in terms of education and the skills that you’re going to get. I think we’re coming from an era where education was just given to students: you had to get the information, get the content, and regurgitate the same for, say, exams or passing tests. But now we’re looking into a future that is very skills-oriented, a reimagined future where we’re getting careers that were not there previously. So how can governments and organizations come in to ensure that we have a future-ready form of education? One thing I’ve seen is that it’s a lot about policies and legislation, but even more about the implementation and the practical steps to get us there, as opposed to just putting the law down as written when it cannot be implemented. So I’ll give you an example. What we have in Kenya is what you call the Universal Service Fund. And it cuts across, because I’m sure Uganda has something similar as well; it cuts across some of the African countries and globally as well. What this fund does is that a lot of the companies that work in technology or telecommunications donate to it in partnership with government to ensure there is accessibility and access to the Internet. I think over the years it has been a fund that has been slow to be taken up, because there has been no accountability and monitoring of how the fund is performing: is the money going into the fund, is the fund being practically implemented across the regions that require Internet access? 
But I think now what we are seeing, especially with our government, and with our Ministry of ICT in particular, is that they’ve put systems in place to ensure that there is accountability and monitoring of this fund, to ensure that we get to that goal of Internet access, especially in rural areas. I’d like to connect this to what my co-panelist said earlier about what COVID did. Because if we look at life generally, in the things that we do, we may tend to forget our why; if we look at the impact that is made over time, then we better understand what this impact is. So I’ll give you an example. During COVID, the people who were staying in the urban areas were able to continue their education because they were connected to the Internet, but those who were in the rural areas, because of lack of Internet as well as issues such as power connectivity, were not able to continue. What that meant, especially now with the tagline of leaving no one behind, is that we potentially left behind a number of students who have a gap that they need to fill in order to get to where the students who were able to continue their education seamlessly now are. And this, over time, creates a situation where you have a form of a global south, but one that is heavily impacted by education. You already have a literacy gap within countries that are already suffering from a lot of developmental issues. So from a policy perspective and from a legislation perspective, it is very important to monitor what that impact is. Because once you’re able to monitor where that impact is, and you’re able to know where the gap was left, then you’re able to take steps towards ensuring that that gap is filled. I’ll give you another example. I’m also a telecommunications lawyer, so I work in the telecom space. 
And what we found is that a lot of the telecom players back in Kenya are now trying to partner with the government to provide low-cost devices that can access the internet. Because it’s one thing to have access to the internet, but another to lack the device that actually helps you connect to the internet and get the skill or the education that you’re looking for. One thing that’s very clear is that the internet is very important for education. There’s a lot happening on the internet in terms of education: you’ve seen the usual Google career certificates, you’ve seen all these platforms that are offering skills and education. And, as with Asia, if you followed the SDG conversation that was happening at UNGA, there was the example of the Khan Academy and what its impact has been like. However, we can’t get there if we’re not looking at, number one, connectivity, and number two, low-cost devices. And then, back to the point of becoming digitally ready in terms of future skills: are we also looking at curriculum integration? What digital subjects, skills, and learnings are being put into what we’d call the traditional, quote-unquote, curriculum? And this is a full multi-stakeholder approach, because what you find as well is that even the educators do not have the capacity to offer some of the digital skills that we are seeing, and you require them to be ready for the future that we are going into. I’ll give you an example. We’ve been hearing all these stories about plagiarism and how generative AI tools work. And the question is: we are now moving into an era where it’s going to be more about critical analysis, because we are bringing in artificial intelligence, we are bringing in all these tools, which can have an extremely positive impact with the right navigation. 
Are we also equipping ourselves, and are we also equipping our governments, our legislators, and our policymakers with the right information to ensure that we are building a digital-ready future for our education systems, where we move from heavy regurgitation of content to more critical analysis and more thinking, allowing students to think and to create in a world that depends so much on thinking and critical analysis? So, to my last point: we are now moving into a space where we are going to require a lot more digital libraries. Previously it was about having books, but how can we access the books with the kind of generation that we are bringing up? We want to bring about a situation where there is cross-border collaboration even when it comes to education, so that skills can be exchanged and there can be a lot of collaboration between those who have developed a bit more and those who are still looking to develop. So I think that’s also one of the points that governments as well as organizations should be looking into to ensure that we have a digitally ready future for education. Thank you.

Ashirwa Chibatty:
Thank you. Yeah, though we are from diverse regions, as humans our basic needs are the same, and the right to education is one of those basic needs now, as is access to the internet. So the challenges are the same, but obviously we have to have localized context for them. Moving to our next speaker, Umut: they are joining us online, so can we get them on the screen, please?

Umut Pajaro Velasquez:
Hello.

Ashirwa Chibatty:
To you, Umut.

Umut Pajaro Velasquez:
Hello, how are you? Well, thank you for the question. I’m going to share some key points about the Latin American situation when it comes to digital education. As some of the previous speakers already said, COVID-19 changed the situation here in Latin America when it comes to digital education, because we realized that we had actually created a bigger gap when it comes to rural areas and also to marginalized communities living inside the cities. So, in order to manage that in a better way, governments came up with some kinds of solutions that I found to be pretty much in common across several governments in Latin America. I’m based in Colombia, and some of the solutions that are starting to be implemented in Colombia are the same solutions starting to be implemented in countries like Brazil, Argentina, the Dominican Republic, Uruguay, and others in Latin America. So one of the main solutions is access, because the way the governments are trying to do it is to promote not only that the private sector gets to the rural areas, but also the creation of community networks, and to invest so that organizations and people in the rural areas can also create their own networks to be connected to the internet. This is a way to expand access not only for the social use of the internet, but also for schools and educational institutions, especially in rural areas and in remote areas of the country. 
They have created laws, in most of these countries, that try to promote partnership between the public and private sectors, with some kind of subsidized rates to encourage private companies to serve certain rural areas that are hard to reach, and community-driven internet accessibility as another mechanism. For example, Mexico, Colombia, and Argentina recently developed laws related to community-driven internet accessibility, where there is a special rate for this kind of connection when the use is mainly for education. The second solution is affordable devices and connectivity: governments and organizations are also working on providing affordable devices and connectivity to students and teachers through several governmental programs. And, as was already said, we are also trying to implement ways to monitor how the resources for those devices are being used, because we had the problem that sometimes those resources were not being used to get those devices, or to get internet to the schools or to the students. So, it’s not only about creating programs for school-based distribution and other incentives, but also about monitoring those programs, so that we can give access to the internet and to the different devices to people, especially in rural areas and in other areas of the country. Another aspect that we are working on a lot right now is training teachers and administrators on digital tools and resources, because we understood after the COVID-19 pandemic that most teachers weren’t ready to face digital spaces and to teach using different technological resources. 
So, we understand that we need to train them on how to use the tools and resources effectively in the classroom, and we understand that this training should be tailored to the specific needs of the schools and the community, because it is not the same teaching in a rural area, or in indigenous communities, or in a marginalized part of a city, or in a private school, as in other spaces. And finally, developing digital literacy programs for students. Right now, some countries are working on changing the curriculum to build in more digital literacy skills, because we understand that these skills are needed given the current technological development, so that students can actually be aware of how to use the tools, not only for good but also in their daily lives. Some countries are developing school-based programs, and others are working on after-school programs and also developing online resources, so that students of every age can build capacity. So, those are pretty much the four points that we are working on the most here in Latin America. We have other problems, but I think we share a lot in common with Asia-Pacific and Africa, so I don’t want to repeat what my colleagues already said.

Ashirwa Chibatty:
Thank you, Umut. I think that's a very good start to our session, and we can already find so many commonalities within our diversity as well, which is always something we can celebrate. So, okay, we all agree that equitable access to digital education is needed, but it comes with other implications, future implications as well. So my next question to the panel, as well as the audience here, is: what policy measures can be implemented to enhance educators' capacity and address the cybersecurity risks in the digital education space across the regions? And how can digital education empower youth with the necessary skills for the evolving labor market? Because we know that the future workforce is going to be different. The third industrial revolution is already over; we're into the fourth industrial revolution, and the purpose of education should be to create a labor force that matches the requirements, the needs, of the future market. So I'll start in reverse order this time, with Umut first. What necessary skills are needed for the evolving labor market, and how can we enhance educators' capacity to deliver them, while also keeping them secure and safe in cyberspace? Over to you, Umut, again.

Umut Pajaro Velasquez:
Okay, well, I think one of the things that we can actually do is start to provide professional development opportunities, not only for teachers, but also to develop digital education standards for teachers and students. That means not only improving our curricula with content about cybersecurity and how to protect ourselves online, but also providing teachers with, for example, digital pedagogy around cybersecurity best practices. One of the things our governments should do when it comes to policies is invest in cybersecurity infrastructure, because schools and other educational institutions can be particularly vulnerable to cyber attacks. Here in Colombia, a couple of weeks ago we suffered a massive attack on the public sector, and a lot of public universities were affected by it. So this showed the importance of implementing strong firewalls, intrusion detection systems, and other security measures to protect the information inside schools and educational institutions. And when it comes to jobs and the necessary skills for the evolving labor market, I think digital education programs should focus on developing transferable skills that can be applied to various new jobs. This should include problem-solving skills, critical thinking, communication, and teamwork, because those are probably the things we are going to be using most in a technological landscape where AI is present, especially critical thinking, because we're going to rely a lot on that in our future working lives. We should also provide opportunities for experiential learning. This means not educating our students only in a regular classroom, but also giving them opportunities to gain real experience through internships, apprenticeships, and other programs, which would help them gain the skills and knowledge they need to succeed in the workforce. 
And finally, I would say we should connect students with more experienced people and with employers who can actually teach the skills needed in the current labor market, and also prepare them for the future and the different trends in digital change that we are living through.

Ashirwa Chibatty:
Thank you. So Valerie, from Africa's perspective, what policy measures have been implemented, what is lacking, and what can be done to enhance educators' capacity, address cyber risks, and make sure that the knowledge we're providing to future generations caters to the needs of the future economy and the future market?

Vallarie Wendy Yiega:
Thank you so much. I'll definitely give you a Kenyan perspective as well, but also just recognizing within this conversation that Kenya is quite ahead when it comes to legislation in the technology space, which may not be the case for most African countries. The first thing is to map out and find what the gaps are in the education system, because as we start to talk about educator capacity, we need to understand what the gaps in educator capacity are to begin with. Number one, we have two forms of workforce: the educators who are much more senior in the profession, and the educators who are coming in, who are much more junior and who may find, quote unquote, more ease in understanding the technological tools. So it's a question of how we are going to put in place an intergenerational co-creation capacity framework, where you're not only skilling the newcomers but also re-skilling those who are senior in the profession. You do not want a situation where you say you want to create a future for education, a future where we are definitely going to see more technological tools in play, and the more senior teachers or educators are not able to interact with these tools. So we are going to see a lot more intergenerational co-creation, and being okay with a mindset shift toward where the future is taking us, as opposed to holding on to the different educator roles we have seen before. I also liked Umut's point on infrastructure. Again, the situation in Kenya, and largely in Africa as well, is that the more urban areas have access to this digital infrastructure. 
To give an example from Kenya, you'll find schools within Nairobi already using the technology, already equipped with computers, already set up in terms of internet connectivity as well, whereas in the rural areas some schools may have only one computer, maybe used by the teachers to illustrate. So again, it raises the issue of digital infrastructure, because it's very hard to understand a technological tool when you don't have regular access to it, using it and testing out what can be done in terms of digital skilling as well. So infrastructure is a big one, but we see more and more that governments are putting resources and funds into creating opportunities for more infrastructure to come into the country. But I think, again, it goes back to our role in the multi-stakeholder model: what are the internet service providers doing? What is the private sector doing? What are the companies offering the services doing? Are governments actively reaching out to these companies to partner with them to ensure there's digital infrastructure, when it comes to what can be done differently to create a future-ready workforce? Then there's the issue of cybersecurity, and I like that as well, because recently our Office of the Data Protection Commissioner rolled out a penalty notice with one of the largest fines, issued to a school. What had happened is that the school had used pictures of children as a form of advertisement for a new intake. The question that lay therein, and that I was asking myself when I saw this penalty notice, is: is the school aware of its obligations when it comes to data protection, especially regarding children, given that children's data is among the most sensitive data classified out there? 
So the question is: the same way we would have privacy by design and by default, are we also putting in measures to ensure that we have cybersecurity by design and by default? And as I mentioned earlier on the issue of firewalls to prevent intrusion: are our educators aware that these are some of the technological tools being used to ensure cybersecurity? Because what you would not like to see is a data breach in a school, as that potentially means a lot of children's data could be exposed. And we've seen the whole discussion around trafficking and what that data can be used for. So that's something we also need to look into in terms of cybersecurity: not only awareness, but also understanding how some of these tools are used and how they can be presented to educators so that they can use them as well. The other thing is: what resources are being rolled out for educator capacity, and are we streamlining them so that we can have curriculum integration for the educators as well? Because we're not going to get there with one or two workshops for educators on what the internet means or what technological tools we are now facing. One thing that is very clear is that artificial intelligence and technology are moving very fast. Previously we didn't have generative AI; now we do. And now that we have generative AI, you've seen ChatGPT roll out, you've seen Bard roll out, you've seen Bing roll out. So clearly there's more innovation coming through, and the question is how we are going to ensure, even as we move to skilling and re-skilling educators, that all this is kept in mind so that we have a future-ready workforce as we move forward.

Ashirwa Chibatty:
Thank you. Thank you, Valerie. Very interesting and valid points, especially when you talk about the senior and junior educators coming into the field. Young people are very much adapted to technology, but senior professors might not be, which might create some problems; so yes, intergenerational solidarity is very important in this phase of human development. Also, about the future you mentioned, it's a very valid point: we don't simply arrive at the future, it is we who take this society to the future, so the future we are going to see is what we are doing now. And the cost of cybersecurity has always been a barrier to cybersecurity by design, but if we invest in cybersecurity now, then in the long term it is very cost-effective; that's something governments in developing countries need to understand. And about the rural-urban divide that you mentioned, I very much echo you, because when I started my work and took a computer to a village, that was very much what inspired us to build up a school. On that note, I'm going to move to Binod before I go to the audience. So Binod, what are your views

Binod Basnath:
on this topic? Thank you, Ashirwa. I think most parts have already been covered by the previous speakers, Umut and Valerie, but even so, I'll try to answer those questions in two folds. First, about the policies needed for cybersecurity. I won't go much into the policies as such, because for the least developed countries I think digital literacy is the more pressing issue. We have a digital literacy rate of around 31%, and we aim to reach 70% in a couple of years as per our Digital Nepal Framework 2019, but that's a hard task. Yes, in terms of the literacy rate we're moving forward very well, but the digital literacy part seems to be quite stagnant. To solve those issues, I think we need to learn from existing frameworks. Like Umut said, we need to make our own frameworks tailored to our needs. We can take ideas from the digital intelligence framework; we can use the ISTE (International Society for Technology in Education) framework; and there are various other teacher competency frameworks. But my idea would be for least developed countries to design a diploma course for producing trainers on digital literacy. Those trainings could be taken by teachers, educators, and administrators as well. And once we create those trainers, they could be hired by CSOs, other organizations, or government bodies to go to different marginalized communities and give training. And it's quite urgent now, because I think over 70% of households in Nepal already have a smartphone and are already using social media platforms very intensively, and with no knowledge of cybersecurity and cyber hygiene, this could be disastrous. My second point is especially about the teachers, because if teachers are well-equipped and well-empowered, they can teach the students, and students can go back home and empower their parents as well. 
We need to devise standards and guidelines for digital pedagogy, online learning environments, learning resources, virtual assessment, digital citizenship, and education management and information systems. We need to have our local standards, but we can get inspired by the ones that already exist in Western or developed countries. So that's the first fold of my answer. But before I go to my second answer, I'd like to start with a small story that I'd like to share, then get some feedback from the audience, and then I'll come back to my answer. In Nepal, I'm also the director of Empowering Asia, which gives students skills that prepare them for the future workforce. I was talking to one of these students, and he asked me a very serious question that kept me pondering for a while. He said: here in Nepal, I've been studying for around 15 to 20 years, 12 years for my school and college degree, four years for my university. Then I go out to the job market, but I struggle to get a job. I just cannot get one, and the one I get pays me so little. But I just take a three-month course designed by the Japanese government, I get a certificate, and Japanese companies are willing to hire me for around $2,000 per month on a Specified Skilled Worker visa. Those three months are so short, but they pay me so much. And back home in my own country, I studied for 18, 19 years, and I barely get a $300-per-month job. Why is that? He asked me this question, and I had to give it a thought before I answered him. But before we go back to this answer again, I'd like to ask two questions to my audience. The first question is for everybody: what do you imagine the future workforce will be like? That's one question for anyone to answer. 
The second question is especially for people from developing or least developed nations: why do you think we have an issue of employment in our countries? If anyone from the audience would like to answer one of these questions, I'd like to give the floor to you. Yes, sir, please.

Audience:
Hello, everyone. This is Narayan Timilshana from Nepal. So it's wonderful to see Nepalese speakers here. Regarding your question, what I want to highlight is that basically we have lots of problems in our teaching pedagogy. Lots of students graduate from the universities but are unable to find job placements quickly in start-ups or other industries. So what we are missing is not only the curriculum, but the way teachers deliver skills and bring new technologies into their teaching methodology. That's why we have been debating about introducing what we call the finishing school concept in Nepal. But the main problem that I want to share on this platform, and hear experiences from the African continent and elsewhere about, is this: when you talk about digital literacy, digital skills, and re-skilling, it's not a tangible thing, it's an intangible thing, and it takes a lot of time. So governments and others are not willing to invest in these things right away; they prefer infrastructure investment and the like. So it's very challenging. What are your thoughts, and how can we cope with this sort of thing? That's my question. Would you like to answer? Let's take an answer from one more and then we'll come back. So hello to all the speakers. I'm Luke, and as a youth I'd just like to add my opinion to this issue of digital education for the future, and pose a question as well, which is about AI readiness. Currently, as a university student, a lot of my friends are wondering: should I use applications like ChatGPT to help me with my homework, and if I do use them, what are the best practices in place? Because it's easy to say, oh, you should not do this, you should do that. But I feel that, as you said, there should be a diploma in teaching digital literacy. 
So my question is: what are the measurable actions that we as youth can take right now to make sure that we're using it ethically and not doing anything unethical like copying? Thank you. Could you repeat the question? So basically, what are the best practices that youth can take right now to implement AI into

Binod Basnath:
their education? Okay, so for the first part, let me finish mine, and then I'm sure you can answer the questions as well. Well, thank you so much for the audience participation and the questions you posed. Within the ideas I'm going to share from here on, I'd like to answer the questions that have been posed, and if I'm not sufficient, please help me out after that. How many of us have actually heard of Industry 4.0, if you could raise your hand? Or the fourth industrial revolution? I think Africa is actually quite ahead in this matter; there are programs being launched to prepare people for the future workforce in terms of Industry 4.0. Let me come back to that. You know about the industrial revolutions, right? The first industrial revolution was governed by mechanical power, like steam engines. The second industrial revolution was more about electrical items, televisions, and other electrical goods. The third industrial revolution was governed largely by technology, computers, and the internet. And now we're moving towards the fourth industrial revolution, which, as one of our audience members already said, is going to be governed by artificial intelligence, big data, machine learning, blockchain technologies, and robotics as well. So, is our economy ready for those kinds of activities, and are we preparing a workforce that matches them? It's a very difficult question that we need to answer. In our least developed countries we're still producing (it's a harsh reality, but I think we need to talk about it) a workforce that caters to the needs of the second industrial revolution, so we are always behind, playing a catch-up game with the developed countries. We are like the rear wheel of the bicycle, which never catches up with the front wheel. 
So, that is one issue, but I think if we talk about this today, and if we go back home and work on it, we can prepare a workforce that is ready for the future economy. And that starts, of course, with the school. We need a digital curriculum, digital pedagogy, and a digital assessment system, and the vocational and technical schools especially have to prepare the workforce we need for the future. Without that, I think we will get stranded again in the same situation we're facing now: in our nations we don't get jobs because we're not matching the skills that our economy needs at the moment. Along with that, I'd just like to leave you with a small thought. Please hear me out, and give it a thought as well: being unemployed and being unemployable are two different things, and I think the latter is more severe. Thank you.

Ashirwa Chibatty:
Rightly said, Binod, also about AI and how Africa is moving ahead. I think the African Union is also looking at an AI center of excellence, something all African nations can benefit from. So Valerie, what are your thoughts on the questions from the audience?

Vallarie Wendy Yiega:
Thank you so much. And yes, just to agree with Binod, Africa is actually moving into a space where we're looking to see how we can use AI to move the continent forward. Just for background, Africa is the continent with the largest number of young people; it has quite a youthful population, so it's definitely looking into that. We've also seen what's happening with the AI labs in Ghana; there's a lot of work going on around that, and a lot of work being spearheaded by the African Union on artificial intelligence in Africa. And I like the question one of the audience members asked about re-skilling and skilling, and how we can move it from being intangible to tangible. Unfortunately, we cannot skip the time and we cannot skip the resources; we do need to put in the time and the resources to get us there. I'll give an example. Recently in Kenya, our government launched what we're calling a housing levy tax. This levy should essentially assist the government in ensuring there's affordable housing for low-income earners, and though we are complaining about the tax, we are paying it. Likewise, you can't skip the time it takes to skill and re-skill the workforce, and you can't skip the patience required for the mindset shift, to ensure our educators get to a point where they're skilled enough to have this educator capacity, and to steer more resources into the sector in a way that streamlines the whole education sector. Because at the end of the day, we are moving into a time where technology is moving very fast. If you're not skilled or re-skilled in the technology space, so as to provide value to the learners, the students, or the workforce, then over time you find yourself becoming redundant or irrelevant to the workforce or to the education service you provide. 
Yes, I like the question that Luke asked about whether artificial intelligence and generative AI tools can be used at the education level. I'm a tech lawyer, so I'm very pro-technology, very pro-innovation, very much in favor of tools that can have a positive impact. But I also understand and recognize that these tools can work to the detriment of learners; we've had a lot of stories about cheating and plagiarism. I think we now need a mindset shift, and an educational shift, in how tests are done as well, because we're no longer just looking at content: you could easily put a question into a chatbot, get an answer, and go with that answer. As educators, we now need to change how this is done, because at the end of the day AI is here with us, it's here to stay, and it's here to create even more innovation as the days go by. So how can we change things to ensure that, even as we test learners, we're able to distinguish critical analysis from a plain regurgitation of facts? Because these tools are also for good: even if you were to ask AI to help you draft an email to your boss, you can already see some value being created from that email, but that does not replace the human element. It does not replace the need for a person to apply themselves and give context to whatever the generative AI tool brings out. 
So definitely, a lot will be done in terms of critical analysis, especially around best practice when working with generative AI tools, but the truth is they're here to stay, and the best way forward is to understand how to use them for good, in line with the ethical and responsible guidelines being formed around artificial intelligence, and to ensure there's monitoring, transparency, and accountability from the developers of artificial intelligence as well. Thank you.

Ashirwa Chibatty:
Thank you, Valerie. Also, on Luke's question regarding ethics in AI, I think Umut is one of the experts on that, so Umut, your wise words, please.

Umut Pajaro Velasquez:
Okay. Well, the use of AI in education is something that is really close to me, because AI is one of the things I work on the most. When it comes to education, I think the youth mostly already know how to use these tools for good, because they already know the limits of this technology, for example when completing different tasks in an education system. The problem is that they are not fully aware of the ethical or legal implications of using these technologies in certain contexts. So we also need to teach those kinds of things, so they can use the tools for good and improve their productivity or the work they've been doing. Another thing we should focus on here is strengthening the more human capacities, such as critical thinking, because, as I said before, that will be essential in this context of artificial intelligence, especially in education. We need to understand that the future professionals, the future labor force, the youth using these technological tools, need to understand how they work, how to use them for good, and how to use them to solve problems, not just to get a fast answer; the point is to use them to build from.

Ashirwa Chibatty:
Thank you so much. So, moving on to the last part of our session: if there is anything the audience would like to add, please do. I think we're ready for some questions here.

Audience:
So, hello. My name is Ivy, and I'm representing the Chinese YMCA of Hong Kong. This discussion has been really informative and insightful, and it has mostly talked about how governments can put out policies. So I would like to ask: is there anything we, as individuals or as youth like myself, can do to also help with digital literacy?

Ashirwa Chibatty:
Would you like to take the question?

Umut Pajaro Velasquez:
Being involved in the youth movement within internet governance, one of the conclusions we have come to over the years is that we shouldn't be afraid to use our voice to say what we want in terms of the kinds of technologies we want to have. We know that some governments are probably not so open and are not seeking to hear young voices, but when we have the space, we should address all the issues we consider should be addressed inside those spaces. Right now, for example, we have the internet governance ecosystem, which has opened its doors to many people from around the world to say exactly what they want about the internet they want. That is an opportunity we shouldn't take for granted; we should appreciate it and talk about exactly what we want. Another aspect I'd like to point to, regarding your participation and your advocacy in policy, is to try to find those spaces in your own countries, because even in the more closed ones there are spaces where you can do advocacy and participate in the construction of the policies being developed in your country. And I would say: don't be afraid to share your knowledge or what you've seen, because it may well be important for building those policies.

Binod Basnath:
Yes, I think that was very right, as Umut said. As youth, I also think you are a very valuable part of your country, and coming here to the IGF is in itself already a good start. So as a youth, you could join more events: more IGF forums, regional forums, your country-wide forums, and you could advocate to your government on those areas you feel need to be changed. And you can take the competencies you gain back home, back to your community, and empower and invite more youths to join in. That way, I think we can synergize and empower more youths with digital competency and literacy. Thank you.

Ashirwa Chibatty:
Valerie, you are a youth ambassador. You've done a lot since you were young, so please, enlighten us.

Vallarie Wendy Yiega:
Thank you, yes. That's a question that's very close to my heart. I coordinate the Kenya Youth IGF, so I understand your question and where you're coming from. And just like Binod said, I think it's very important that you're in this forum right now; that really shows you're taking steps toward understanding what happens in the internet governance ecosystem. What I'd say is that there's a lot of self-education in this space, a lot of actively learning how to engage. I know the Internet Society offers excellent online courses on how you can engage. I also know that you can join the relevant youth organizations that you have. Here in Asia, there's an organization that covers the Asia region quite well; it's something Jenna's team does. I forget the name, but I'll get it to you soon after this meeting. In that organization you'll find a lot of young people from across the Asia region, and I've found them very powerful: a lot of digital literacy happens within that organization, a lot of advocacy, and a lot of community building as well. I've been following the organization for quite a number of years, and I know they're very forward-thinking when it comes to equipping young people with the skills to navigate the internet space effectively. One more thing that has helped me, having been in this space for about five years now, is understanding the specific challenges and opportunities of your own home country. Yes, we do have a lot of best practices in place, but what also helps is understanding your context and being very clear on what you're trying to achieve and what change or impact you want to make. 
So especially here, where there are already established organizations, my two cents would be to join them and speak to them. I know the Asia Pacific team within this Internet Governance Forum; I saw the entire team last night, and I think they can also help you navigate this. I think Ananda can help you navigate this space too. He's just laughing over there, but he's also part of the youth IGF team, so I think he could be a good starting point for how to navigate this space. And with the right persistence, if you keep at it, you'll see your understanding of digital literacy and of internet governance keep growing over the years. Yeah, thank you.

Ashirwa Chibatty:
Yeah, I think the team of Jenna's you're talking about is the NetMission Academy? Yes, the NetMission Academy; I think it's based in Hong Kong, supported by .asia. So you can connect with Jenna or Jennifer for that, and if you don't know them, I can help connect you as well, so do not hesitate. I would very much love to hear more from you, but we're nearly at the end of our time; we just have the last 15 minutes. So before we wrap up, let me also take this opportunity to thank Ananda, who is here helping us take notes and keep everything in place so that this discussion and conversation can continue. And as he's in the Nepal Youth IGF and very active in this space, I hand it over to Ananda to tell us what he's noted.

Ananda:
Thank you so much. Thank you so much for keeping me here. So hello, everyone. It was a nice, insightful discussion today, and we are discussing a very important issue. Today we went through different case studies from Asia to Africa to Latin America. We talk about Industry 4.0 and the massive developments in AI and machine learning, which are the hot topics of the whole IGF itself. But what we also have to understand is that there’s a big digital gap. There are still people who are unconnected, who don’t have access to the internet because of different barriers. It might be affordability; it might be accessibility. So today we have discussed many things, but in the age of Industry 4.0, how we actually blend technology with education and provide these kinds of skills to students, so that they are industry-ready when they graduate, is the most challenging issue of today. We also talked about the re-skilling of educators and the contextualization of technology in the local context, which is very important. And as I think Valerie mentioned about the universal service fund, there are those kinds of service funds allocated for developing technology. In the case of Nepal, there is a Rural Technology Development Fund which can be used, and which shall be used, to actually make the internet more affordable, inclusive, and secure. And there is a role for everybody in this process; we talk about multi-stakeholder engagement in IGF. Governments make policies, civil society needs to support them with the monitoring and accountability part, and the private sector will be supporting. Then we can actually build the ecosystem that creates students who are ready for industry and can enter the global job landscape. And I think Umut also mentioned the community networks.
So if you are not aware of community networks, they are networks that are owned and managed by the community themselves, in places where there is no connectivity. Where last-mile connectivity cannot reach, people can access various funds and build their own community network using affordable technologies. I think Africa has so many examples of that, and organizations like ISOC and APC are working hard to build community networks across the world. In Nepal, there were a few, but I think they are not very active these days. But we have to work on those things for the people who don’t have affordable devices or any device that could connect to the internet. For those people, we can create community learning centers where they can go and learn these things. And we have also talked about open courseware, where content can be accessed online and offline. Khan Academy is one such example, and I think there are many more. There’s a repository built by RACHEL; the RACHEL project is working on an open courseware system where you can find vast amounts of information which can be used in rural areas where there’s no access to the internet. It is a local-server-based content management system, and maybe some community networks have already used it as well. So, wrapping up my point: while policies are way behind in the case of developing nations, we have a huge responsibility, and Valerie was asking me to share how youth can contribute to that. What I always say about youth is that we are the biggest stakeholders of the internet today, and with this role comes a bigger responsibility. How do we actually make this internet more inclusive? How do we help connect the people who are not connected today? And how do we create an ecosystem that allows everyone to access content on the internet?
So, and the initiatives like civil society, initiatives like internet society, IGF itself, national IGF, regional IGF, and local IGF should actually work hard so that we can actually eliminate these things. So I’ll wrap up my things over here. Thank you so much for being here. I’ll give it back to Asif. Thank you.

Ashirwa Chibatty:
Thank you so much, Ananda. So this is the last call for any interaction from the floor, if there are any solutions or anything else to share. I’ll take it from there.

Audience:
Hi, good morning. I’m Dean Dell from the Philippines. I would just like to ask about any initiatives that any of you are presently doing. For example, in the Philippines, we have more than 7,000 islands, and most of these islands are in remote locations and still don’t have any internet connection or even utilities. So apart from what you have said a while ago, are there any existing initiatives or tools that you can recommend for those underserved schools, so that they will still be able to maximize the advantages of AI and other digital technologies and content? That’s it.

Ashirwa Chibatty:
Thank you. Would anybody like to take it? And please note that we have very limited time now, so please make it short.

Binod Basnath:
I’ll try to be as quick as possible. Thank you for the question. During 2017, we did a pilot project with a movable and deployable ICT resource unit (MDRU). It was a network for the community in places where there was no internet connectivity. This device would create an internal networking system for the whole community, and it would be a community-owned network. With that network, people could share information through voice calls, video calls, messaging, and photo sharing. But we used it for education, and it was very effective. We reported that to the relevant entity, which was further reported to ITU-D. One of the study groups has published findings on how community-led networks and devices can be effective in places where there are no internet facilities. Taking this one step forward, we are trying to pilot a locally accessible cloud system, a LAC system, that I think has been implemented in the Philippines as well. I think you’ve been using it mostly for disaster response, but we want it to be used for the education and health sectors for marginalized and backward communities of Nepal. Thank you.

Ashirwa Chibatty:
Thank you, Binod.

Ananda:
So, talking about this, I think Binodji has covered a lot. When it comes to open source repositories, there are many. RACHEL is one of the best I have found; R-A-C-H-E-L is its spelling. In RACHEL, you get Khan Academy integrated, and there are a lot of open source learning resources, which are updated periodically and can be downloaded onto any computer. You can make a local server and then broadcast it to the network, so it can be accessed without the internet; it is not internet-based. And when you have a connection, you can update the content. There is another initiative called Kolibri. Kolibri is also integrated into RACHEL, but Kolibri is more on the user end. You have content, you download it, and then you can transfer it to another person’s phone without internet access. That is the power of Kolibri; it uses peer-to-peer networking technology. So if I have that repository, whatever content I have here, I can transfer to another person without the internet. And inside RACHEL, you can find Khan Academy and every kind of content you can imagine, updated regularly. So if you want more resources, maybe we can set up a call and talk about it. I think Valerie knows more about it as well, because many community networks share the same principle. I deployed one back during COVID, and we are now trying to upgrade it. It is not operational right now, but we are planning to upgrade it; we have gone through a feasibility study, and we are looking for funds. So those kinds of technologies are out there. We can discuss more offline because of the time limit; there’s a red sign coming up, so I think we should wrap up. Thank you so much for being here.

Ashirwa Chibatty:
So we have the last five minutes, as this gentleman showed me. So I’ll give our speakers one minute each for closing remarks, and shorter is better. We’ll start with Umut online, then Valerie, then Binod before I close.

Umut Pajaro Velasquez:
Okay, well, I would like to remind everyone that digital education and digital literacy are still in development and in constant change right now, because technology is constantly changing and society is in constant change. So we need to be aware that the future of education is work that is done day by day, especially in spaces in the Global South, where societies still need access, still need equipment, and still need infrastructure. That will be it.

Ashirwa Chibatty:
Thank you, Valerie.

Vallarie Wendy Yiega:
Thank you so much. I think for me, my mantra has always been: each one, teach one. That means that, just like the member of the audience said earlier, it’s up to us to ensure that we carry this generation of digitally skilled learners together into the future. So what that means is: how can we contribute? Is it through policymaking? Is it through building innovative solutions? Is it through putting our voices towards ensuring that we have a future-ready, digitally skilled education system? So for me, it’s always paying it forward and rolling out the information that is required by the stakeholders on the ground. Thank you.

Binod Basnath:
So my last words: I’d like to urge the policymakers of the Asian countries, especially the South Asian countries. We all had our ICT in Education Master Plan One, and I think most of the countries have completed that and are moving towards the second master plan. But I don’t think there’s much awareness about the ICT in education master plans among most stakeholders. We don’t even know what the master plan is and what we’re trying to achieve. Post-COVID, we have learned that ICT-based learning can be more inclusive and accessible if it’s implemented correctly. So I think we need to raise more awareness, map our resources, and have realistic plans, not just plans for the sake of having a plan. And I think, post-COVID, ICT Master Plan Two will be a very effective tool for us to reach the Education 2030 goals. Thank you very much.

Ashirwa Chibatty:
So thank you, Binod, Valerie, Ananda, and Umut for your insights and for sharing your expertise. When we talk about the internet or any technology, the technology itself is free from prejudice or harm; how we use it decides what it becomes. That’s why the multistakeholder model is so important. And with that said, I’d like to go back to the human aspect of technology. How do we build resilience in digital education? There are technical aspects, but we also have to be inclusive from design, accept diversity, and practice empathy. We have to share each other’s experiences so we don’t duplicate things; we have to learn from each other and work as a community for a better shared future. So let’s think about our children, and their children’s children, when we make any kind of decision. Thank you so much, everyone. I’d like to take this opportunity to thank my SIG leadership team, Shraddha, Samuel, Maxwell, and everybody who is not here but has been supporting us for the past two years. So thank you, everybody. And please do join our Internet Society special interest group on education; there’s a QR code you can scan to join us and connect. Let’s move towards a global internet that ensures inclusive, equitable, and quality education, promoting lifelong learning for all. Thank you so much, everyone, for your presence, and also to the ones that are online. Suara, I see you there, so thank you so much for being there online. Thank you, everyone. We’ll close the session.

Audience:
So I think a good photo would be nice, yeah. Photos are always good. So for those who are present until the last, if we could just take a photo for our memory, that would be great. We could do it right outside the hall, because the next session might be in here.

Speaker statistics

- Ananda — speech speed: 164 words per minute; speech length: 1,232 words; speech time: 450 seconds
- Ashirwa Chibatty — speech speed: 167 words per minute; speech length: 2,986 words; speech time: 1,071 seconds
- Audience — speech speed: 157 words per minute; speech length: 687 words; speech time: 263 seconds
- Binod Basnath — speech speed: 147 words per minute; speech length: 2,882 words; speech time: 1,173 seconds
- Umut Pajaro Velasquez — speech speed: 140 words per minute; speech length: 1,910 words; speech time: 817 seconds
- Vallarie Wendy Yiega — speech speed: 199 words per minute; speech length: 3,902 words; speech time: 1,177 seconds

Green and digital transitions: towards a sustainable future | IGF 2023 WS #147


Full session report

Lazaros

During the discussion, the speakers emphasized the significance of supporting repositories in South Africa and collaborating with various institutions such as universities, research councils, national facilities, and museums to promote open access. They recognized the need for effective coordination and cooperation to ensure the success of this strategy.

One of the key points raised was the importance of training librarians to index and categorize content that falls within the criteria of the Sustainable Development Goals (SDGs). This would enable easy access and retrieval of valuable information and research related to these goals. The speakers also highlighted the need to link this content with existing repositories to maximize its visibility and impact.

Moreover, the speakers discussed the use of DSpace software by a majority of universities in South Africa. By adopting this software, universities can effectively manage their digital collections and make them accessible to a wider audience. They stressed the benefits of using a widely accepted and trusted platform for the efficient dissemination of knowledge.

Furthermore, the development of the South African SDG app was discussed as a means to gather collections within universities. This app serves as a convenient tool to gather and showcase research and information specifically aligned with the SDGs. It provides a platform for researchers and institutions to contribute towards achieving the goals set by the SDGs and promotes open access to this valuable knowledge.

Overall, the speakers had a positive outlook on leveraging library experts and adhering to international best practices for open access in South Africa. They recognized that by working collaboratively and adopting established practices, they can enhance the visibility and impact of research related to the SDGs. The emphasis on training librarians and the use of advanced software and technologies reflects a commitment to the efficient management and dissemination of knowledge.

Online moderator

The analysis reveals that Andrej Khrushchev has raised an intriguing question about the role of technology in supporting the green transition, particularly regarding security and energy efficiency. This inquiry suggests that technology has the potential to play a crucial role in achieving environmental sustainability goals.

It is noted that the global commodity value chain adds complexity to the task of implementing green technologies. This complex network involves the production, distribution, and consumption of commodities across the globe. Understanding and optimizing this intricate system is necessary to ensure that technology adoption does not inadvertently harm the environment or compromise security measures.

The sentiment of the argument is considered neutral, indicating an objective discussion that invites further exploration and analysis of the topic.

The related topics of the argument encompass the Green Transition, Technology, Security, and Energy Efficiency. These subjects are closely intertwined and interdependent, as advancements in technology can significantly impact the transition towards a more sustainable and secure future.

Furthermore, the argument aligns with Sustainable Development Goal 7 (Affordable and Clean Energy) and Sustainable Development Goal 13 (Climate Action). These global goals highlight the importance of renewable energy sources, energy efficiency improvements, and climate mitigation strategies. The question raised by Khrushchev emphasizes the role of technology in advancing these goals and promoting a sustainable future.

In conclusion, the analysis indicates that the question posed by Andrej Khrushchev emphasizes the potential of technology in supporting the green transition, especially regarding security and energy efficiency. Navigating the complexity of the global commodity value chain is crucial to ensure the responsible adoption of technology. The argument maintains a neutral stance, prompting further investigation and exploration. This topic aligns with Sustainable Development Goals 7 and 13, underscoring the significance of technology in achieving a more sustainable and secure future.

Audience

Tarek Hassan, the head of the Digital Transformation Centre Vietnam, is interested in understanding inter-ministerial collaboration in Japan, specifically regarding biodiversity. He wants to gain insights into how different ministries work together and the division of labour among them to effectively address green initiatives. Tarek believes that understanding the roles of these ministries will shed light on whether the digital experts lead the green initiatives or vice versa.

Collaboration between various ministries and levels of government is crucial for wildlife population control. The Ministry of Environment (MOE) has proposed and revised the Wildlife Protection Control and Hunting Management Act. Additionally, the Ministry of Agriculture, Forestry, and Fisheries (MAFF) is responsible for agriculture and the National Forest. Collaboration is necessary due to overlapping issues, ensuring successful outcomes.

In certain domains, like wildlife capture control, there is collaboration between the government and the prefectures. They work collaboratively on some aspects but independently on others. The country establishes basic guidelines and laws, while the prefectures handle practical program implementation. This two-tiered approach ensures shared responsibilities and effective governance.

Tarek is also interested in capacity building within ministries concerning digital transformation. He is curious whether the digital capacity is built within the ministries themselves or if it is outsourced. Additionally, he wonders about the role of the digital ministry within the governance structure.

The establishment of a digital ministry within the cabinet is a significant development. This agency primarily handles human number identification but is not heavily involved in the ICT techniques and technologies proposed by the private sector. Tarek is intrigued by the ICT techniques proposed by the private sector and their potential to contribute to achieving specific goals.

Tarek is curious about the quality of data used in the twin transition. However, no specific evidence or arguments were mentioned in the text. It remains unclear how the data quality could impact the twin transition, but it indicates Tarek’s interest in ensuring the use of reliable and accurate information.

Overall, Tarek’s pursuit of knowledge regarding inter-ministerial collaboration, division of labour, capacity building, and data quality reflects his commitment to understanding Japan’s approach to biodiversity and digital transformation. His goal is to gather insights that can inform his work at the Digital Transformation Centre Vietnam.

Daisy Selematsela

During the analysis, several key points were highlighted by the speakers. The first point emphasised the fact that African leaders have taken the initiative to set their own regional priorities in response to the Sustainable Development Goals (SDGs) agenda. This demonstrates the commitment of African countries to align the SDGs with their specific needs and challenges.

One specific example that was mentioned is South Africa, which invests 50% of its annual research and development budget in collaboration with international partners. This highlights the importance of international collaboration in achieving the SDGs, as South Africa recognises the value of leveraging external expertise and resources to drive progress.

Another interesting point discussed was the role of open access repositories in enhancing South Africa’s SDG hub. Open access repositories facilitate the sharing of information and make open source academic journals available to a wider audience. This is crucial in effectively addressing the SDGs, as it promotes knowledge sharing, collaboration, and innovation.

The analysis also highlighted the significance of knowledge management in relation to the SDGs, particularly in terms of availability, accessibility, acceptability, and adaptability. The effective management of knowledge plays a critical role in achieving the SDGs, as it ensures that the necessary information and resources are readily accessible to those working towards these goals. Furthermore, it was argued that relevant role players, including researchers, policymakers, and citizen scientists, are essential in solving global health problems. This highlights the need for multi-stakeholder involvement and collaboration to tackle complex challenges.

Additionally, the government’s strategies and international collaborations were recognized as crucial factors in supporting the SDGs in South Africa. With 50% of its research and development investment coming from international partners, South Africa acknowledges the importance of working together to achieve these goals. Furthermore, the existence of a Draft Open Science Policy in South Africa demonstrates the government’s commitment to fostering an environment conducive to open science and collaboration.

Overall, the analysis emphasised the importance of African leaders setting their own priorities within the SDGs agenda. It also highlighted the critical role of open access repositories, knowledge management, relevant role players, government strategies, and international collaborations in achieving the SDGs in South Africa. These findings provide valuable insights and recommendations for policymakers, researchers, and various stakeholders involved in driving sustainable development.

Horst Kremers

The analysis highlights the increasing complexity in managing data with the rise of urban digital twins. One of the key challenges identified is the lack of an international standard for the ontology of urban digital twins. This lack of standardisation makes it difficult to compare existing ontologies automatically. In order to ensure coherence and conformity to legal, financial, and ethical boundaries, challenges in coherence analysis need to be addressed.

Furthermore, the analysis emphasises the need for novel mechanisms and models to handle the complexity associated with urban digital twins. The emergence of more sophisticated digital representations of the urban sphere, known as digital twins, has led to the generation of massive data and active data streams from various sensors across cities. This has posed new challenges in data management. The sentiment expressed in this regard is one of concern, as managing the increasing complexity of data becomes a daunting task.

Another aspect that requires urgent attention is the implementation of just-in-time demands in managing digital twin logistics. Prompt implementation is necessary to ensure efficient management of digital logistics, and it is suggested that staging emergency drills and recording action plans will aid in meeting these demands. The sentiment expressed here is one of urgency, highlighting the importance of timely and effective implementation.

Regarding the handling of big data and complex data, it is noted that administrators are not well equipped in this area. The lack of educational resources and training inhibits their ability to effectively handle such data. This is seen as a negative impact, as there is a clear need for administrators to adapt and acquire the necessary skills to navigate the complexities of big data.

In terms of governance, a framework is deemed essential to operationalise long-term systems for the service of citizens. There is a positive sentiment towards the establishment of governance structures that ensure the smooth operation and maintenance of these systems. Additionally, there is an emphasis on the need for participative governance, involving not only the government but also citizens. The involvement of multiple actors is seen as crucial in ensuring a democratic and inclusive decision-making process.

The complexity of the global commodity value chain is acknowledged, and it is argued that a holistic green transition is necessary to address this complexity. This transition should encompass various topics such as food security and energy efficiency. The sentiment expressed here is positive, as the analysis recognises the importance of different professions joining together to guide the green transition. However, joining these ontologies presents a challenge, as it requires careful consideration of the purposes and consequences of data application.

Overall, the analysis sheds light on the complex nature of managing data in the context of urban digital twins. It emphasises the need for standardisation, novel mechanisms, and effective governance frameworks. Additionally, it highlights the urgency of implementing just-in-time demands and the importance of equipping administrators with the necessary skills to handle big data. The analysis also emphasises the importance of a holistic approach to address the complexity of the global commodity value chain.

Ricardo Israel Robles Pelayo

The speakers in the analysis highlighted several key points regarding sustainable development and the role of technology and collaboration in achieving sustainability goals.

One of the main points emphasized was the potential of big data and artificial intelligence (AI) in enhancing the efficiency and reliability of renewable energy sources. Through the analysis of big data and the implementation of autonomous decision-making, AI can revolutionize the generation and management of renewable energy. This can contribute significantly to SDG 7: Affordable and Clean Energy and SDG 13: Climate Action. The speakers provided supporting facts that demonstrated how AI and big data can improve the efficiency of renewable energy sources such as solar and wind.

Another important aspect raised in the analysis was the harmonisation of regulation and policies around digital technology and environmental sustainability. The speakers argued that this harmonisation is crucial and presents a significant challenge. They stated that it is important to consider specific technological aspects that have applications in the environment. By aligning regulations and policies, authorities can foster an environment that promotes the use of digital technology for sustainable development. This alignment may contribute to SDG 13: Climate Action and SDG 17: Partnerships for the Goals.

Collaboration between authorities and various stakeholders was emphasised as vital for achieving sustainability goals. The speakers stressed that authorities should work closely with private business corporations, civil society, and academia at both national and international levels. This collaboration is necessary to address the challenges and complexities associated with sustainable development. They argued that by involving multiple stakeholders, authorities can ensure more effective and comprehensive efforts towards achieving sustainability goals. This close collaboration aligns with SDG 17: Partnerships for the Goals.

The analysis also highlighted upcoming challenges in the pursuit of sustainability. These challenges include the reduction of greenhouse gas emissions, ensuring social justice, and promoting clean technologies. The speakers emphasised that more clean technologies and sustainable practices need to be adopted to combat climate change. Additionally, they highlighted the importance of ensuring social justice in the transition, particularly through training and skills development. By addressing these challenges, authorities can make significant progress towards achieving SDG 10: Reduced Inequalities and SDG 13: Climate Action.

Furthermore, the analysis suggests that authorities should actively participate in international forums such as the Internet Governance Forum (IGF). The speakers acknowledged that Mexican parliamentarians attended the IGF in Kyoto, highlighting the significance of active involvement in these forums. By participating in international forums, authorities can have a voice in shaping global policies and development strategies, aligning with SDG 17: Partnerships for the Goals.

Lastly, the creation and promotion of laws were emphasised as important for achieving digital and green transitions for sustainable development. The speakers argued that laws play a crucial role in driving the adoption and implementation of these transitions. They emphasised the need for laws to incentivise and regulate sustainable development practices. By creating and promoting such laws, authorities can facilitate the transition to more sustainable and environmentally friendly practices.

Overall, the analysis underscores the significance of big data, AI, collaboration, regulation, and laws in achieving sustainable development goals. The adoption of these technologies, collaboration between stakeholders, harmonisation of policies, and the creation of supportive laws are all essential for advancing sustainability efforts and addressing various challenges. By focusing on these aspects, authorities can pave the way for a more sustainable future.

Tomoko Doko

Wildlife management is crucial for the sustainable future of Japan, particularly due to the significant crop and forest damage caused by sika deer and wild boars. In 2015, the Wildlife Protection Act was revised to reflect the country’s commitment to preserving its flora and fauna. This demonstrates a positive sentiment towards wildlife management.

Furthermore, the implementation of ICT technologies has proven effective in monitoring wildlife. Drones are used to track habitats, while sensor systems help identify different animal species. These technological advancements provide accurate data and facilitate better management strategies.

The Japanese government has also introduced a certification system for wildlife capture programs. This initiative aims to counter the decline in hunters and reduce the population of sika deer and wild boars. The government’s goal is to reduce the population to half of 2011 levels, and progress has been made towards this target.

However, collaboration between stakeholders in wildlife management is lacking. Government officials, scientists, and private sectors often fail to work together effectively, hampering progress. To address this, bridging individuals or organizations are necessary to encourage cooperation and align goals.

The collaboration between the Ministry of Environment (MOE) and the Ministry of Agriculture, Forestry, and Fisheries (MAFF) is vital for managing wildlife populations. The MOE and MAFF have proposed and revised the Wildlife Protection Control and Hunting Management Act, setting common goals and creating guidelines at the country level. Prefectures are responsible for implementing practical programs.

Collaboration between the government and private sectors is essential for effective wildlife management. While the MOE and MAFF provide high-level goals, private sectors play a key role in implementing programs in collaboration with prefectures.

The establishment of Japan’s digital agency primarily focuses on personal identification numbers rather than ICT implementation for wildlife management. This highlights the need for collaboration between the government and private sectors to effectively implement ICT systems.

In conclusion, wildlife management is vital for Japan’s sustainable future. The revision of the Wildlife Protection Act, the use of ICT technologies, and the certification system for wildlife capture programs all contribute to positive efforts. However, improving collaboration among stakeholders is crucial. Bridging individuals or organizations can facilitate cooperation, ensuring successful wildlife management and a sustainable future for Japan.

Liu Chuang

Less than half of the Sustainable Development Goals (SDGs) have been achieved, and progress has been hindered by natural disasters, climate change, and the COVID-19 pandemic, particularly in small islands, mountainous areas, and critical ecosystems. These challenges have greatly impacted the advancement of the SDGs, which aim to address issues such as climate action, good health and well-being, and life on land.

To accelerate progress towards the SDGs, it is proposed that open science be embraced. Open science involves the use of big data, the Internet of Things (IoT), and the inclusion of various fields such as engineering. By adopting these approaches, while ensuring systemic management and cultural diversity, it is believed that progress towards the SDGs can be accelerated. Different organizations have their own ways of handling the SDGs, and a wide-ranging, reciprocal cooperation is being proposed among all partners to drive advancements.

In an effort to ensure trackable and high-quality agricultural products, China has launched the Geographical Indications, Environment, and Sustainability (GIES) programme, supported by a World Data Center. GIES operates as a decadal programme from 2021 to 2030, and it has initiated 17 cases in various regions of China in the past two years. This initiative aims to support the SDGs related to zero hunger and responsible consumption and production. By establishing infrastructure like the World Data Center, China is taking steps to ensure the traceability and quality of agricultural products.

The GIES project has yielded several benefits, including increased income for farmers, high-quality products for consumers, and credit for contributors. Over 600,000 local farmers have already benefited from the project, and the quality of the products can be traced to ensure consumer satisfaction. This demonstrates the positive impact that initiatives like GIES can have on achieving SDGs, particularly those related to poverty reduction and decent work and economic growth.

It is important to pay more attention to underprivileged individuals and developing nations, especially those in mountain areas, small islands, and rural villages. These demographics are highly vulnerable and in need of assistance. By utilising technology, science, and commercial sectors to provide aid, it becomes possible to empower and uplift these underprivileged communities. The role that these sectors can play in addressing SDGs related to poverty reduction and reduced inequalities is stressed.

Identifying trustable data for research and business purposes is a challenging task since data comes from various sources, including government, private sectors, and university research sectors. Different policies define how data is opened for use, making it essential to establish standards for data quality and reliability.

The World Data System, which consists of 86 world data centres, provides peer-reviewed data to address this challenge. This global collaboration, under the International Science Council, ensures that data undergoes checks for data security, data quality, and authorship. By providing peer-reviewed data, the World Data System supports the SDGs related to industry, innovation, and infrastructure, as well as partnerships for the goals.

To ensure the accuracy and reliability of data, it is recommended to adopt meticulous processes for data validation and curation. This involves reviewing data quality with the help of experts and capturing information about the data source and production method. By implementing such practices, it becomes possible to address challenges related to data quality and trustworthiness, thus advancing the goals of SDG 9: Industry, Innovation and Infrastructure.

In the realm of data handling, protecting the original authors and ensuring data security are of paramount importance. Proper data handling protocols are observed to uphold privacy and security. Adhering to these protocols allows for the responsible use of data and preserves the rights of authors, aligning with SDG 16: Peace, Justice and Strong Institutions.

In conclusion, the achievement of the Sustainable Development Goals (SDGs) by 2030 requires significant progress, as less than half of the goals have been achieved to date. Natural disasters, climate change, and the COVID-19 pandemic have impeded progress, particularly in vulnerable regions. However, embracing open science, leveraging technology and collaboration, and ensuring the quality and reliability of data are potential pathways to accelerate progress towards the SDGs. Initiatives like China’s Geographical Indications, Environment, and Sustainability (GIES) programme and the World Data System demonstrate the commitment to ensuring high-quality agricultural products and traceable data. By prioritising underprivileged individuals and developing nations and utilising technology and scientific advancements, it is possible to provide aid and address inequality. The challenges of identifying trustworthy data can be met through meticulous processes of validation and curation, while upholding data protection and security protocols. Overall, a multi-faceted approach is needed to achieve the SDGs and create a sustainable and equitable future.

KE GONG

In this analysis, several key points are highlighted regarding the importance of sustainability and digitalization, the urgency to rescue the sustainable development goals (SDGs), the significance of using digital technology to implement and rescue the SDGs, and the importance of interdisciplinary, intersectoral, and international cooperation.

The first point emphasizes the dual transitions of sustainability and digitalization as crucial for the future of humankind. It is stated that these transitions are a historical process with great significance. Furthermore, it is asserted that digitalization serves as an essential tool in achieving sustainability.

The second point focuses on the urgency to rescue the SDGs. It is revealed that over 30% of the SDG targets have not made any progress or have regressed below the baseline established in 2015. This lack of progress is exemplified through the projection that 575 million people will still be in extreme poverty by 2030. These facts illustrate the need for immediate action to address and advance the SDGs.

The third point highlights the importance of using digital technology to implement and rescue the SDGs. It is highlighted that digital transformation is crucial in three specific areas: addressing hunger, transitioning to renewable energy, and leveraging digital transformation opportunities. Examples are provided to support this argument, such as the use of big data in smart manufacturing, urban planning, and climate action. These examples demonstrate the potential of digital technology in achieving the SDGs.

The fourth and final point underscores the significance of interdisciplinary, intersectoral, and international cooperation. It is emphasized that digital technology should work across disciplines without any borders. Platforms such as the China Association of Science and Technology (CAST) and The World Federation of Engineering Organizations are presented as facilitators of collaborations in this regard. The importance of such cooperation is highlighted as essential for successful digital transformations.

In conclusion, the expanded summary reiterates the key points outlined in the analysis. It emphasizes the importance of the dual transitions of sustainability and digitalization, the urgent need to rescue the SDGs, the significance of using digital technology to implement and rescue the SDGs, and the importance of interdisciplinary, intersectoral, and international cooperation. Through these points, it is evident that sustainability, digitalization, and collaboration are all crucial elements in advancing global goals and ensuring a better future for humankind.

Xiaofeng Tao

The workshop commenced with a series of presentations from six speakers, each focusing on different aspects of the green and digital transition. Professor Liu, the director of global change research, led the session by emphasising the significance of open science in driving sustainable development. She highlighted the need for transparent and collaborative research practices to address urgent environmental challenges.

Following Professor Liu’s presentation, Ms. Tomoko Doko, the President and CEO of Leisure and Science Consulting Limited Company, discussed wildlife management in Japan for a sustainable future. She showcased innovative approaches taken in Japan to protect and conserve biodiversity. Ms. Doko stressed the importance of a holistic and integrated approach involving all stakeholders to ensure effective wildlife management.

Mr. Kremers from Codata, Germany, then shared insights on the practical implementation of digital twins. He explored the role of digital twins in managing complexity, including process models and workflow standards. Mr. Kremers highlighted how digital twins enhance decision-making processes and optimise resource allocation in various sectors.

Next, Professor Ricardo from Mexico presented the challenges and commitments in digital technology and a sustainable environment as outlined in the United States-Mexico-Canada agreement. He emphasised the importance of aligning digital innovation with sustainable development goals and highlighted potential benefits and risks associated with the digital transition.

After the presentations, a discussion session provided an opportunity for participants to ask questions and provide comments to the speakers. The workshop facilitator posed three key questions focusing on government issues, stakeholder cooperation, and policy frameworks. Each speaker addressed one or more of these questions.

Professor Liu shared insights on the key challenges faced by governments in driving sustainable development, emphasising the role of political will and effective governance structures. Professor Ricardo stressed the need for enhanced collaboration and partnership among multiple stakeholders to address complex environmental issues. Ms. Tomoko focused on the role of policy frameworks in guiding wildlife management strategies and the importance of regulations for their effective implementation. Mr. Kremers discussed the potential of policy guidelines in promoting the adoption of digital twins and ensuring their compatibility across different sectors.

The workshop concluded with expressions of gratitude to the speakers, on-site and online participants, and the organisers. The facilitator acknowledged the thought-provoking presentations and insightful questions from the participants. Time constraints prevented a detailed discussion of all topics, highlighting the need for future collaborations and continued efforts to achieve a sustainable future.

Attendees were invited to gather for a group photo, fostering connections and setting the stage for potential future engagements. The facilitator expressed special appreciation to Professor Liu Chuang for her ongoing partnership and work in this field. Overall, the workshop provided a valuable platform for knowledge sharing and networking among experts, contributing to ongoing discussions on the green and digital transition.

Session transcript

KE GONG:
Thank you. Thank you, Professor Hao. Now I share my screen with all of you. My title is The Three Musts for Accelerating the Sustainable and Digital Transformations, because the theme of our workshop is green and digital transformation towards a sustainable future. I’m very pleased to be part of this workshop because this is really important. Talking about the dual transformation, my understanding is that these dual transitions are a historical process which is crucial to the future of humankind. The goal of this dual transition, I think, is to achieve sustainable development for humankind and the planet. That is a value-pulled transition. And it is digitalization which is a very important tool for us to achieve sustainable development; this transition, I consider, is a technique-driven transition. These two transitions interact with each other. They are not just two parallel transitions; they are interactive. Digitalization is a very important tool for us to achieve sustainability. First, I would like to talk about the urgency of the dual transformation, especially the urgency to rescue the sustainable development goals. All of us know that eight years ago, all world leaders gathered in New York and adopted a sustainable development agenda called Transforming Our World: the 2030 Agenda for Sustainable Development. In this agenda, there are 17 sustainable development goals defined jointly by all member countries of the United Nations, and under these 17 goals there are 169 targets. However, this year is the midpoint of the whole agenda; 2023 is exactly in the middle of the whole process. Last month, the world leaders gathered again in New York to review the progress of the sustainable development agenda. However, at this middle point of the 2030 Agenda, the world leaders and the world’s people are shocked by the current progress. The latest global-level data and assessments paint a concerning picture.
This is the concerning picture. The blue items are on track or at the target rate. The yellow ones show fair progress, but acceleration is needed. The red ones show stagnation or regression of those targets and goals. This picture shows us that about half of the targets show moderate or severe deviations from the desired trajectory. More than 30% of these targets have made no progress or have even regressed below the 2015 baseline, below where they were eight years ago. This assessment underscores the urgent need for intensified action to ensure the SDGs stay on course and progress towards a sustainable future at all. In short, the SDGs need to be rescued. For example, just have a look at Goal 1, no poverty in all its forms everywhere. This figure shows we have not come back on track: 575 million people will still be in extreme poverty by 2030. And here you see that the world’s vulnerable populations remain uncovered by social protection. For example, only 8.5% of children receive social protection, and only 23% of elderly people receive social protection. That is why the Global Sustainable Development Report, GSDR, which appears every four years, gave its newest report the title Times of Crisis, Times of Change. So we have to realize the urgency of this situation for sustainable development and take real action to make changes. That brings me to my second point. The second must is to take action using digital technology to implement and rescue the United Nations SDGs. Indeed, when we talk about action, people pay a lot of attention to digitalization. Let me quote what Secretary-General Guterres said at the UN Summit last month in New York. He emphasized the need to take action in three key areas: addressing hunger, transitioning to renewable energy, and leveraging digital transformation opportunities.
Further, please allow me to quote some words from the Political Declaration of the United Nations Sustainable Development Summit last month in New York. It states, we acknowledge that important lessons were drawn from the COVID-19 pandemic on health, culture, education, science, technology, and innovation and digital transformation for sustainable development. It states, we will continue to take action to bridge the digital divides and spread the benefits of digitalization. We will expand participation of all countries, in particular developing countries, in the digital economy, including by enhancing their digital infrastructure connectivity, building their capacities and access to technological innovations through stronger partnerships and improving digital literacy. It states, we will leverage digital technology to expand the foundations on which to strengthen social protection systems. We commit to building capacities for inclusive participation in the digital economy and strong partnerships to bring technological innovations to all countries. So, digital transformation was stressed again and again at the Political Summit last month in New York. That shows the importance of digitalization as a lever to achieve the Sustainable Development Goals. So here, I just show you some examples. For electrification, digital technology and the Internet of Things play a very important role in achieving further electrification with renewable energies. That is Goal 7. And for Goal 9, industry, innovation and infrastructure, I show you how digital big data has been used for smart manufacturing. And here is an example of a famous city, Hangzhou, in China. In the center of Hangzhou is a big, beautiful lake, which we call the West Lake, but that makes the traffic of this city very difficult, and this city was the fourth most traffic-jammed city in China.
But with the help of big data and the implementation of a so-called city brain, empowered by artificial intelligence and the fifth generation of mobile communications, 5G, this city has now dropped to 27th for traffic jams in China. So a smart city helps Goal 11, and also climate action. Here is another example from China of using big data to detect leakage in water pipes and help make the city more resilient to climate change. These few examples show how digital technology can help in different sectors, countries, and regions to take real action towards the sustainable development goals. Finally, I would like to stress the importance of cooperation, because Goal 17 is partnership. Nobody can deny the importance of partnership. But here I would like to stress that cooperation should happen in interdisciplinary, intersectoral, and international ways. For example, building information modeling and geospatial engineering are now widely used in the construction area. However, these technologies are deeply rooted in different areas of engineering, such as internet and information communication technology, construction, the Internet of Things, and big data, with applications in the management of resources and utilities, telecommunications, urban and regional planning, routing of vehicles, parcel shipping, and so on and so forth. They hold great potential to support sustainable smart cities. So all these things should work together with digital technology, because digital technology has no disciplinary borders. And here I show you the China Association of Science and Technology, in short, CAST, which is a platform for interdisciplinary collaboration within China. We have natural science; industrial technology and engineering; medical science, technology and engineering; agricultural science, technology and engineering; and interdisciplinary institutions.
In total, there are more than 200 disciplinary-based institutions representing more than 40 million scientific, technological, and engineering professionals. So this platform is ideal for interdisciplinary collaboration, but also for international collaboration, because we have consultative status with the United Nations. We have worked closely together with the IGF, and we try to deepen our collaboration. Another example is the World Federation of Engineering Organizations, where I serve as the immediate past president. This federation consists of more than 100 so-called national member organizations; CAST, for example, is the Chinese national member of this federation. So our federation is a comprehensive engineering professional organization representing tens of millions of engineering professionals across the world. And we are keen to collaborate with the IGF more closely in the near future, to work with you all together on accelerating the dual transformations towards a sustainable future. I stop here. Thank you very much for your attention.

Xiaofeng Tao:
Thank you, Professor Gong. I appreciate it. Our second speaker is Professor Liu. Professor Liu is the director of the Global Change Research Data Publishing and Repository. Professor Liu is also a professor at the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences. Her topic is about open science for the green and digital transition. Professor Liu, you have the floor.

Liu Chuang :
Thank you, Doctor. I’m glad to be here and share this information with you. Just like Professor Gong said, we are at the mid-term of the SDGs, so we need to accelerate our actions. So what is the challenge? We are now at the mid-term to 2030, and less than half of the SDGs in the world have actually been realized. Climate change, natural disasters, and COVID have all impacted the SDGs, especially in mountain areas, on small islands, and in critical ecosystem regions. So what is the objective of the next step? The one thing is to accelerate towards the SDGs. We need to focus our efforts, and we need to work together for this target. So what is the solution? Everybody, all the different organizations, have their own approaches. The solution is that we need open science. Technology also, but science needs a link to technology, so big data and the Internet of Things, and a link to engineering; not only science and technology but also engineering, working with real cases, not only talking, but starting even in small villages. We also need systematic management and cultural diversity. That is the solution. And we need to cooperate together, all partners. With this idea, in China we have started a new project called Geographical Indications, Environment, and Sustainability. The short name is GIES, and this is a decadal program from 2021 to 2030. To do this, we have infrastructure. In the background there is a World Data Center, which we call the Global Change Research center, with data published in the repository of this World Data Center. And then open data, open knowledge, opening the geographical sites, letting people visit to understand who you are and what you are doing.
And this infrastructure got the VCS prize in 2018. Then, for technology, we dig into big data and the Internet of Things to make the product sustainable. So we assign identifiers: DOI digital object identifiers, science and technology identifiers, and Global Change Data and World Data Center IDs. We also provide a trademark and a quick-response code system, so people can very quickly find where your product comes from. And then for GIES we use the network, that is the internet, wired and wireless. Then data-published articles and products, even local observation stations, and also the product packages are all linked together. So this is a network where everything is connected, so you can trace where the product comes from and what the quality of the product is. In China now, over the last two years, we have 17 cases in different regions of China, with many different kinds of products: rice, maize, apples, and many other high-quality agricultural products. As for the benefits, right now more than 600,000 local farmers have benefited; they got more income. And then customers: for example, one of our customers said, we are happy we got a high-quality product; we have a little bit more money to spend, but we had little idea which one is good. From this project we can identify which one is a good one and worth spending money on. Many people do the same. Contributors also benefit, and the scientists and government officers get credit. And then, how do we organize this? We have different partners: a scientific committee, a program committee, and company programs, and we work together.
And then the key players are the Geographical Society of China and the Institute of Geographic Sciences and Natural Resources Research of the Chinese Academy of Sciences. At the very beginning, two years ago, we had 40 partners join this program, but now we have 101 partner organizations, and they are very, very happy. This work has also received great recognition from FAO. FAO has started a new worldwide program called One Country, One Priority Product, and we work with FAO on this. Starting in Bangladesh and several countries in the Asia-Pacific, we have begun this program, and it is very welcome in different countries. We also support not only developing countries but also industrialized countries. We work with the European Union on geographical indications cooperation, where China has an agreement. We have the ambition to exchange products: from Europe they have good wine, and China has good tea. And both of us need to know: what is the data, what is the quality, where does it come from? What about the culture? What about the socioeconomic development? And how do we make this sustainable development? Both sides’ information can be open. So, in summary, for GIES and its innovative methodologies, the keywords are open science and multi-stakeholder engagement. But open science here means not only science, but a link to the original geolocation and its environment, a link to the product value chain, and open-science methodology, technology, and engineering management within the geographical culture. Thank you very much.

Xiaofeng Tao:
Appreciate and thank you for your presentation. The third speaker is Ms. Tomoko Doko, the President and the CEO of Leisure and Science Consulting Limited Company. Her topic is on wildlife management in Japan for a sustainable future. Ms. Doko, please.

Tomoko Doko:
Thank you, Chairman, for the introduction. My name is Tomoko Doko. I have a PhD degree, and I also got a hunting license in Japan after that. Today I’d like to talk about wildlife management in Japan for a sustainable future. Let’s start with the general background of Japan and its wildlife. As you can see in the picture on the right side, Japan consists of four primary islands: Hokkaido, Honshu, Kyushu, and Shikoku. There are 97 terrestrial mammals in Japan, including 38 endemic species. Endemic species means species that exist only in Japan; in the picture on the left side, the center one is the Japanese serow, one example of an endemic species in Japan. Now, why do we focus on wildlife management today? Because wildlife management is a management process influencing interactions amongst and between wildlife, its habitat, and people to achieve predefined impacts. It attempts to balance the needs of wildlife with the needs of people using the best available science. Here I introduce the most important Japanese law related to this issue. The official name of the law is the Wildlife Protection, Control, and Hunting Management Act. This act establishes programs for protecting and controlling wildlife and manages hunting, in addition to preventing the risks related to the use of hunting equipment. There are three main components of this act. The first is control of population; that is the main focus of today’s topic, related to the need to reinforce capturing. The second component is management of wildlife habitat. The third and last one is countermeasures for damage prevention. Today I introduce two species of Japanese mammals: one is the sika deer and the other is the wild boar. These two mammals cause trouble in Japan, for example crop damage and forest-area damage.
Both sika deer and wild boar cause significant damage in these two domains. This picture is just an example of how sika deer damage cropland or forest. The geographic distribution of the two species has become a critical problem in Japan. As you can see in this graphic, from 1978 both species have tended to expand their geographic distribution. Therefore, the Japanese government decided to change the law. A revision of the law was made in 2015 and a new goal was set. The background is that the negative impact on ecosystems and the crop damage caused by sika deer and wild boar had become too severe to ignore, while the number of people who can carry out population control of sika deer and wild boar has declined due to the decrease and aging of the hunter population. Therefore, a new system of certification for wildlife capture programs was implemented, so that the government can reinforce capturing and train the next generation of hunters. The Ministry of the Environment and the Ministry of Agriculture, Forestry, and Fisheries set up a new goal: by the year 2023, the government of Japan aims to reduce the populations of sika deer and wild boar to half of their levels in the year 2011. This is the structure of the act; due to time constraints, I focus on the second component, the control capture of wildlife. As you can see in the illustration, sika deer and wild boar are designated as wildlife species for the control capture program. Very briefly, I introduce the countermeasures we use: either gun shooting or trap hunting. For traps, as you can see in pictures A, B, and C, there are three types. For example, in the wild boar’s case in picture A, the boar walks without noticing the location of the trap, and then its leg is captured by a wire. The second, B, is a box trap: we use bait to attract wild boar or sika deer, and when they touch the bait, the trap box closes. The third is a larger-scale box-type trap.
The trap in picture C is a small one, but it could be much larger; 10 or 15 sika deer or wild boar could fit inside. Then, for the digital and green transitions, some ICT systems and technologies have been proposed and implemented in Japan. These are the examples. Basically, three main technologies are used. For example, drones are used to monitor habitats or for sensing, and there are remote monitoring systems: in the forest, for example, when deer or wild boar pass in front of the sensor system, it reports to the user directly through the wireless network. The last one is, for example, a system to count numbers or identify animal types, so it can differentiate sika deer from wild boar, or count how many wild boar are there. In this case, the operator can choose the timing of when to close the trap door using these ICT technologies. This is the current situation and the outlook for the future. So far, we are doing well; the populations of the two species tend to be decreasing, but we only have data until 2019, so we don’t know the situation right now. We should continue to reinforce capturing. Thank you very much.

Xiaofeng Tao:
Excellent presentations, thanks. Our next speaker is Mr. Kremers from CODATA, Germany. His topic is digital twins in action: complexity management, including process models and workflow standards. Mr. Kremers, you have the floor, please.

Horst Kremers:
Yeah, thank you very much for the introduction. Dear colleagues, best greetings from early-morning Berlin, Germany. I am very sorry not to have the opportunity to be with you in Kyoto, because it is a fantastic city in Japan. I have been there myself, and I hope you also enjoy your time in the city besides the IGF conference. My topic today is digital twins in action: complexity management, including process models and workflow standards. There are some words in that title where I personally hope that, even in the summing-up of the discussion, we may find opportunity, because there is a bit of a deficit between what we have been doing in recent years and what we need to do in the coming years, when the complexity of what we handle becomes much larger than we ever thought before. I am working on the Sustainable Development Goals, on resilience topics in disaster prevention and disaster information management, and on urban information systems. My background is in disasters and hazards, that is, what can happen to our environment and to our fellow citizens. We have to try to do our best to be better in many things, as the introductory presentation by Professor Kegong very convincingly stated. We have seen this, and AI can support our society at large. …newspapers, then you see that our life is not very easy, because we have to deal with all these things. There are certain facets of dealing with complexity issues, certain facets of urban resilience, where we start not only from collecting data and seeing what we can do with the data; we have to take a strategic approach, see the whole problem, and then look into the details of how they fit together. We start from a holistic approach to information management for intelligent cities, or smart cities as we call them, which is characterized by societal demands and by current problems and challenges in technology, financing, and so on.
There are more advanced requirements for urban infrastructures. We have been working on urban infrastructure in 3D for at least 30 years; 25 or 30 years ago we started with 3D urban models, and that is now very much advanced in technology, which also creates many more problems in information management. There is technology behind that: laser scans are becoming better and more sophisticated day by day. Then, from the topics of safety and security, we have internal security, which is the security of citizens in the city, and disaster prevention, for what happens regularly. We cannot avoid the disaster, but we can better prepare our citizens so that there are not so many deaths, and not such big losses of damaged infrastructure and so on. There are the ecological and climate perspectives, social and sociological issues in the city, and fractionalized production and supply chains. This is one of the major things that makes a city live and active, with the perspective that anything can happen; it is not just static facts, but also what happens and what has to be done. Other facets of urban resilience are the gross agglomerations, and you see the problems of the really big cities, which are much larger in other countries of our world than here in Europe. There are development problems and ecosystem problems, where we have to deal with health and related matters for our citizens in the city. And what I want to draw your attention to is urban ecosystem services: the sustainable development principle of seeing not only what the ecosystems are, but what the ecosystems do for our citizens, making life enjoyable and providing services for urban development. This new topic came up only two or three years ago: very sophisticated systems of ecosystem services. I come back to these things when I touch on the aspect of processes.
Without taking too much time to read, the last point is intelligent transport systems. Here in Europe, also in Germany, very much work is being done at the moment in transport (railway transport, air transport, and so on), and much money and development is put into intelligent transport systems, with a lot of detailed sensor systems all over, producing multiple streams of information just in time. We come from 3D urban models of different stages of granularity, and now to the term urban digital twins. Digital twins are not about robotics; they are about having different, more sophisticated digital representations of what we call the urban sphere. There are certain principles to start from the beginning: it is a common good, there is a value behind it, and we have to deal with quality, adaptability, openness, security and privacy, curation, standards, and, overall, federated models. In principle, these principles are not absolutely new, but they have to deal with what I call much more massive information that becomes available. Granularity comes down to fractions of millimeters, and information is not only on top of the landscape but in the landscape and in the ground below it. In urban terms, all the pipes and things below the surface of the urban infrastructure are absolutely important, together with all their functions: water pipes, sewage systems, metro systems, underground tunnels for transport, and everything. To handle these things with at least the same high-level digital principles as what we did above ground, with the typical 3D models, is a challenge for the future; but it is not only a challenge, it is of course also a perspective for doing much more to organize our common space in the city.
Herein, we have to organize these massive data and active data streams, which come from sensing certain aspects in all these systems. We have to organize this in data spaces, and I give you an overview of the recent data spaces that the European Union is working on: from manufacturing to the Green Deal, mobility, health, finance, energy, and so on, and on the right of the slide you see smart communities, also mentioned for our action space. This is an open system where, of course, many other spaces can be joined, but there is a lot of activity going on, especially in mobility, the third entry from the left; mobility is, as I said, of absolute priority at the moment. And you see all the kinds of industrial ecosystems that deal with that, from construction, tourism, textiles, proximity, and automotive to health, and so on. What we do not have yet, and we may discuss that later, are really good means of dealing with that complexity of information. The ontology of urban digital twins is a kind of common conceptualization, comprising the digital semantic models and procedural models that we are dealing with. It starts with terms, properties, identity, status, annotations, roles, causalities, and semantic networks, which are more or less known as principles. Nevertheless, we have to do much more with procedural networks, because there is action in the digital twins. The most challenging difference is that we do not have only static facts; as I said previously, all these things are on the fly, with sensors all around and information streams coming from every side, and the whole system is not just for presentation, full stop, where you have the presentation and anyone can take it; now the whole system is for directly steering the city.
I am not favoring a direct robot working behind that; nevertheless, for traffic management, traffic light management, or traffic optimization, there is a lot of interactive connection between the physical and the digital. Not in all cases would that be possible, but for handling this we need procedural networks. That does not happen just by chance: we have to build models and discuss the models of this, and the ontologies that we set up would need, at the ontology level, capabilities of comparison across different ontologies globally and across different cities, because at the moment a lot of different proposals for ontologies of urban cities are on the table. There is not yet a real international standard for them. We have to compare these ontologies, and we have to do that automatically, because these are such complicated systems. Imagine a data management plan for a whole urban digital twin: that is rather complex, and for comparison we need that automatism. We need a function for the union of ontologies: we have different subdomains which we model first, and then we have to merge them to get a more complete, holistic set of ontologies of the digital twin. We have to do generalization in the ontologies, because we have to deal with technical detailed structures while also supporting upper management in the city, for decision support at very different organizational levels. Coherence analysis is the question of whether the ontology and the details of the stored data are coherent with legal boundary conditions, financial boundary conditions, ethical boundary conditions, and so on. This list is large, and there is detailed discussion of it, but we do not have the time at the moment. We have to homogenize the terminology, and we are working on formats and meta-information.
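The union and comparison operations mentioned here can be illustrated with a toy example. The triple representation and the two mini-ontologies below are hypothetical, chosen only to show the idea of merging sub-domain ontologies and measuring their overlap automatically.

```python
# Represent each sub-domain ontology as a set of (subject, relation, object) triples.
transport = {
    ("TrafficLight", "is_a", "Sensor"),
    ("Road", "part_of", "TransportNetwork"),
    ("Sensor", "located_in", "City"),
}
utilities = {
    ("WaterPipe", "part_of", "UtilityNetwork"),
    ("Sensor", "located_in", "City"),
}

# Union: merge two sub-domain ontologies toward a more holistic one.
merged = transport | utilities

# Comparison: Jaccard overlap between the two ontologies.
overlap = len(transport & utilities) / len(transport | utilities)
print(len(merged), round(overlap, 2))  # → 4 0.25
```

Real urban-twin ontologies would need typed properties, generalization hierarchies, and coherence checks against legal and financial constraints, as the speaker notes; the set operations above are only the simplest core of such tooling.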
Nevertheless, the most important thing for the future would be new standardized workflows for standard operating procedures in this big information flow. At the same time, to do this logistics just in time, we need not only to get it right just in time but to implement just in time. This implementing just in time is something with which we also do not have very good experience at the moment. Behavior models are among the challenges; I come to the end of my presentation. Besides cloud computing, which is much discussed, you will also see at the end my hint about implementing just-in-time demands: absolutely new science is needed behind that. My recommendation for action is that we have to record, work with scenarios, and work on complexity management. We have absolute deficits in building models for complexity management, sorry, but it is really urgent that we do something about it. And for the full management, you also see the entry on the right side, audits: independent control of plans and implementations. Does it work, does it have the effect, does it reach the goal that was planned for? Thank you for your attention. I am looking forward to the discussion, and here you have the download link for the presentation. I have more material for those who are interested in digital twins, and I am very happy to have direct contact later. Thank you.

Xiaofeng Tao:
Thank you. And our fifth speaker is Professor Ricardo from Mexico. His topic is challenges and commitments in digital technology and a sustainable environment according to the United States-Mexico-Canada Agreement. Let's welcome Professor Ricardo from Mexico. Thank you very much.

Ricardo Israel Robles Pelayo:
Thank you very much. Hello, everyone. I would like to thank Dr. Liu, Dr. Tao, and Dr. Tomoko, and I would like to say hello to Keogh and Dr. Horst. Well, I would like to talk about the challenges and commitments in digital technology and a sustainable environment according to the United States-Mexico-Canada Agreement, which is called the USMCA. First, I would like to show how the legal framework is formed in Mexico, since it is important to efficiently address the challenges of the green and digital transitions towards a sustainable future. In general terms, our Mexican Constitution stands as the highest legal authority, followed by international treaties, federal laws, and local legislation, along with the official Mexican regulations. In human rights, both our Constitution and international treaties occupy a place of equal importance, ensuring the protection and promotion of human rights in accordance with the pro-person principle. Within our Constitution, Article 4 expressly recognizes that every person has a right to an environment adequate for their development and well-being. This underlines the relevance of environmental sustainability in our fundamental legislation. At the international level, Mexico has signed 62 international instruments on environmental matters, including notable events such as the United Nations Conference on the Human Environment in Stockholm and the United Nations Conference on Environment and Development in Rio de Janeiro, Brazil. Among the most important treaties, the USMCA seeks to establish a framework for economic and commercial cooperation between the three neighboring nations. Although the USMCA does not specifically address the use of digital technology in environmental sustainability, it is undeniable that these two areas are crucial to the future of our societies and economies. Chapter 24 of the USMCA focuses on the environment and establishes goals to promote the protection and sustainable management of natural resources.
This includes a commitment to create and effectively enforce environmental laws and to comply with the international environmental agreements to which we are a party. Although the agreement addresses issues concerning digital technology, e-commerce, and data protection, it is important to consider specific technological aspects with application to the environment. Harmonizing regulations and policies around these issues is a crucial challenge. Regarding the Mexican national legal framework, the environmental law, which is called in Spanish the Ley General de Equilibrio Ecológico y Protección al Medio Ambiente, in Article 5 promotes the application of technologies, equipment, and processes that reduce pollution, and promotes scientific and technological research in favor of the environment. As I mentioned at IGF 2021, authorities and civil sectors should consider using big data to generate and use clean and renewable energy. And the question is: what can we say about the use of artificial intelligence that is being discussed during the current IGF in Kyoto? Well, I think it is important to know its operation and applicability to protect the environment. Nowadays, the demand for sustainable and efficient electrical energy is an unavoidable priority. In this context, AI has emerged as a transformative tool that can revolutionize the generation and management of renewable energy. AI, through big data analysis and autonomous decision-making, can improve the efficiency and reliability of renewable energy sources such as solar and wind. As we can see, we have a solid legal framework to address the green and digital transitions towards a sustainable future. However, we need more. In addition, it is essential that the authorities join together and collaborate closely with private business corporations, civil society, and academia at the national and international levels.
Moreover, with the support and advice of world experts like my colleagues in this workshop, we can, on the one hand, learn from their experience and, on the other hand, exchange ideas and strategies to build a green and sustainable world with the support of these technologies. Only by working together can we take full advantage of these political and legal instruments and build a sustainable and equitable future for all. In addition to the above, some of the challenges where technological strategies must be implemented are: first, to reduce greenhouse gas emissions in all economic sectors, promoting the adoption of cleaner technologies and sustainable practices. Second, to continue with regional and global cooperation and invest in digital infrastructure, thus facilitating the transition to a digital and sustainable economy. Third, to ensure equity and social justice in the transition through training and skills development, so that communities affected by changes in industry can fully participate in the digital and green economy. And finally, to continue working on the harmonization of standards and norms related to technology and the environment between the three countries, thus facilitating trade and cooperation in areas crucial to the sustainable future of North America. Thank you very much for the invitation again.

Xiaofeng Tao:
Thank you, Professor Ricardo. Maybe there is a technical problem, so our sixth presenter is offline for the time being, and we move to the open discussion. First of all, I have three questions for our experts. After that, let's see whether any onsite or online participants have questions or comments. There are three questions. The first question focuses on the key challenges in governance issues. The second focuses on strengthening the cooperation among multiple stakeholders. And the third focuses on policy frameworks, policy guidelines, regulations, and the like. So I hope each of our experts will select one. First, Professor Liu, please.

Liu Chuang:
Yeah, I think there is a challenge for the future under the new transition. The challenge is how to help the vulnerable people in developing countries, least developed countries especially, and in mountain areas, small islands, the countryside, and villages. There are so many vulnerable people. We need to call on all communities, governments, and international organizations to pay more attention to these people. These people need help. They really need freedom from hunger, poverty reduction, and to be kept safe from disasters. So I think the challenge is how we can pay more attention to these people, not only cities, not only rich people. From my experience working with these people, I pay more attention to this. So the challenge is whether science, technology, ICTs, and commercial actors can all work together in the best way for them. This is my opinion, from my experience.

Xiaofeng Tao:
Yes. Thank you, Professor Liu. Professor Ricardo.

Ricardo Israel Robles Pelayo:
Well, I would like to answer the same question, and I am going to talk from the law point of view. As I said, it is important that the authorities get actively involved in international forums such as this one, the IGF. In fact, I am especially happy because some Mexican parliamentarians who are interested in internet governance issues attended this IGF in Kyoto. Without any doubt, this is a great start. Now, it is important that they do an excellent job in materializing the creation of laws and their due promotion, to achieve the goal of taking advantage of the digital and green transitions for sustainable development. Thank you.

Xiaofeng Tao:
Thank you. Miss Tomoko.

Tomoko Doko:
Okay, about question A, the challenges, I would like to give my opinion. Probably the situations of developing countries and industrialized countries are completely different. But what I feel now is that, for example, among government officers, scientists, and the private sector, there are many people who work on those issues very seriously. However, they tend to work independently; in a way, I feel they sometimes work separately, in isolation. In that case, what I feel is that people or organizations who can bond these people together are lacking. So people or organizations who could function as a bridge or bond will be necessary in the future; that is the challenge, in my understanding. Thank you.

Xiaofeng Tao:
Thank you. And Mr. Kremers.

Horst Kremers:
Yeah, I think governance issues are something where we in science lag behind. A lot of new methodology is needed for complexity and processes, not only the development of existing methodology. But science does not work alone. Like the other speakers, I am also interested to learn more from Ricardo's experiences in Mexico. We have to deal with the people in the administrations, and I would say, as I know here from Germany, that for big data and complex data the administrations are not really well equipped; sometimes an educational setting is also needed. So how do we build the needed competencies? Because after science has gained experience and it works, the whole thing normally goes into administration as an operational, long-term system running for the service of citizens. That is not only a scientific part. And this kind of governance needs to be set up in a participative mode; as Ricardo also said, we work not only with the government but also with citizens. Citizens are not just general citizens: there are engineers, there are doctors, there are health-specific agencies and so on in the service of people. These kinds of actors need to discuss with us, and that is what we need to support. Thank you.

Xiaofeng Tao:
Thank you. Please close this presentation. So, to the onsite participants: do we have any questions or comments for our four speakers today? Please.

Audience:
Yeah, hi, good afternoon. My name is Tarek Hassan. I am the head of the Digital Transformation Center Vietnam, working on behalf of the German Federal Ministry for Economic Cooperation and Development, at GIZ Vietnam. My question is for Tomoko-san. I was very inspired by the work you do, since we also focus on facilitating the green and digital twin transition. I was wondering about the inter-ministry collaboration, because I think you mentioned two ministries, the Ministry of Environment and the Ministry of Rural Development, or some sort of ministry focused on biodiversity; sorry that I don't have the name off the top of my head. But I was wondering what the division of labor is, also with respect to the role of the Ministry of Internal Affairs and Communications. Is this more within the jurisdiction of the Ministry of Environment? Is the Ministry of Internal Affairs and Communications of Japan also working on biodiversity issues? I think this comes back to the question of: do the green folks work on digital, or do the digital folks work on green? And what are the collaboration mechanisms surrounding that? Thank you so much.

Tomoko Doko:
Thank you for your questions. Maybe I can show my PowerPoint again. Zuwei, could you show my PowerPoint, please? Around page nine. Okay, the two ministries you are talking about are, first, the Ministry of Environment, called MOE in English for short, and the other is the Ministry of Agriculture, Forestry, and Fisheries, which we call MAFF, M-A-F-F, for short. Page nine, please. Okay, maybe I can control it. Okay, yeah. And what are the different functions of these two ministries? For MOE: for example, the law I introduced, the Wildlife Protection, Control, and Hunting Management Act, was proposed and revised under the authority of the Ministry of Environment. And what the other ministry, MAFF, does is that they are in charge of agriculture. And they have land; for example, if land belongs to the government, we call it a National Forest, and they are in charge of the National Forests, too. So MOE and MAFF have a lot of overlapping issues; especially on this population control of sika deer and wild boar, they need to collaborate. Due to the time constraints, I did not introduce this in much detail, but basically there are consultations between the two ministries. The new goal I introduced was set up jointly by the two ministries, so this is a common goal of the two ministries and a common goal of the Japanese government, too. Also, under the ministries there are the prefectures, and under the prefectures there are the cities and villages in Japan. As for how they collaborate, I did not explain very much, but in this figure the red color means the country's work and the blue color means the prefectures' work. They work together in some domains, like the second component I mentioned, the control capture of wildlife: the government does something and the prefectures do something together. But some work is divided and done independently, too. Basically, the country prepares the basic guidelines and the law.
And under that, the prefectures run the practical programs. The implementation of programs is done by the prefectures. That is how they collaborate with each other. Did I answer your question?

Audience:
The question that is really interesting for us is: how do you build up capacity within the ministries? Because you mentioned IoT devices that are being deployed. So is that technical capacity for digital transformation built within the ministries? Is it something that you outsource? Do you work together with the digital ministry? Or does the digital ministry actually have a role in that?

Tomoko Doko:
In my understanding, there is a Digital Agency that was newly established inside the Cabinet, but they are in charge of issues such as, for example, the number identification of individuals, and they do not do this kind of work very much. The ICT techniques and technologies I mentioned and introduced today were proposed by the private sector. So what we are doing is: at the top, high level, the Ministry of Environment and MAFF collaborate to set up a goal as a government, and when necessary they sometimes need to revise the act. Based on such a revision, the private sector and the prefectures start to work on it. I belong to the private sector, so I got the certification as an implementer of this program. The prefecture develops a program, and we implement it. That kind of collaboration is occurring in Japan.

Audience:
Yeah, we talk about the twin transition. So what kind of data can be accepted? For example, how do we know whether the data we are using is good data or poor data? This is for Professor Liu.

Liu Chuang:
Yeah, good question. There is big data in society now; a bunch of data comes. But how do we identify which data is trustable, which data is good and can be used for your research or for your business? This is a really challenging, good question, thank you. Data is divided into different sources: some data comes from governments, some from the private sector, and some from research, from university research sectors. There are different policies, and then the question of how to open the data. For most of the research part, I am from the Chinese Academy of Sciences; across the whole world we have the World Data System, which is under the International Science Council, and in total there are 86 world data centers in the system. There is peer review for the datasets, because all the data comes from research, from different scientists, with different ideas, different methodologies, and different results. So to make sure of this, we should control, first, whose data it is, who the author of the data is; we need to protect the original authors and know where the data comes from. Then, how the data was produced, with which model and which methodology; you need to have curation of the data. Second, we need to check data security: different policies, different countries, different organizations, whether it is personal security, business security, and so on; we need to check this. And then we need to check the data quality. As for data quality, I am a geographer, and for geographers there are different processes: tables, text, and then raster, geolocation, different resolutions; it is very, very complicated, but we need experts to review this kind of data.
Our data center is Global Change Research Data Publishing and Repository; through the publishing methodology and peer review, the data can be very trustable. So we call it trustable. So I encourage you: if you have data, go to the World Data System. There are international regulations, and putting the data there makes it trustable. Thank you.
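The checks Professor Liu lists (authorship and provenance, production methodology, security, and expert quality review) can be pictured as a simple metadata checklist. The `DatasetRecord` structure and field names below are hypothetical illustrations, not the actual World Data System workflow.

```python
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    author: str = ""                # provenance: who produced the data
    methodology: str = ""           # how the data was produced (model, method)
    security_cleared: bool = False  # personal/business security checked
    peer_reviewed: bool = False     # expert quality review completed

def missing_checks(rec: DatasetRecord) -> list:
    """Return the names of the trust checks a dataset still fails."""
    gaps = []
    if not rec.author:
        gaps.append("provenance")
    if not rec.methodology:
        gaps.append("methodology")
    if not rec.security_cleared:
        gaps.append("security")
    if not rec.peer_reviewed:
        gaps.append("quality")
    return gaps

rec = DatasetRecord(author="Liu et al.", methodology="field survey + raster model")
print(missing_checks(rec))  # → ['security', 'quality']
```

A dataset would be considered "trustable" in this toy model only when `missing_checks` returns an empty list; real repositories encode far richer review metadata.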

Daisy Selematsela:
Yes, the internet is a challenge where I am. Okay, let me go ahead. Thank you, colleagues. What we want to highlight with you is how we look at open access repositories as an accelerator in enhancing our South African SDG Hub. I just want to move quickly, given the time. We are aware that African leaders have responded to the SDG agenda by setting their regional priorities based on their Common African Position, and following that, we also looked at the African Union Agenda 2063, and this is what highlights sustainability issues for us. The African Union Agenda places prominence on research and innovation for sustainable development. An important development is the formulation of the SDGs with universal recognition of the importance of quality education, especially in the Global South, which is Goal 4. When we look at the Goal 4 targets of particular relevance to those of us in knowledge management and knowledge production, we look at repositories, data stewards, libraries, and information specialists, which also aligns with Goal 3, which is how to ensure the livelihood and well-being of our population. So how do we come in, from where we are, with what we want to do regarding sustainable development and sustainability? We look at Goal 3 and then Goal 4. Goal 4 on education targets the issues around who the actual role players are, especially those involved in knowledge production, and who bears the responsibility for the complex and interrelated issues of accessibility and affordability of knowledge resources. And I just want to indicate that knowledge management and its impact on the SDGs is highlighted within four areas: availability, accessibility, acceptability, and adaptability.
Here we are looking at how we actually facilitate the sharing of information; at accessibility and the roles that information literacy programs play; and at acceptability, making open access academic journals available, because if we want to address the SDGs, we need to be looking at all these things. The other aspect is the issue of adaptability: how do we consider the training of researchers and policymakers, and citizen science and public outreach support, to ensure the application of knowledge in solving the key global disaster and health problems? And I would just want to indicate the indicators that are key to us in the Global South, and especially in Southern Africa where we are. We look at the amount of research and development spend relative to gross domestic product; 50% of the annual investment in research and development performed in South Africa comes from international partners. We also look at the indicator on qualitative measurement of use of and access to ICTs, especially now that we are looking at the fourth industrial revolution; the ability to produce high-technology exports; and also the issue of higher education internationalization, because we know that our scientists, researchers, and postgraduate students are international, and they co-publish and so forth, like what we are doing today, co-presenting. We also look at the indicators on the number of scientists the country produces and the number of patents filed in our country. What is also important is the number and impact of articles published in highly ranked journals. These are the indicators that are highlighted.
And quickly, the influences on our indicators, especially in Southern Africa, are the issues around government strategies: the National Research and Development Strategy, the Higher Education Plan, for example, and the plan for science, technology, engineering, and maths, which is the 10-year innovation plan from our Minister of Science and Innovation. And we also have our South African Open Science Draft Policy, which is also assisting us. So I will now leave it to my colleague, Lazaros, to touch on how we support knowledge for sustainable development and how we capture the SDGs. Lazaros, you can come in.

Lazaros:
Morning, colleagues. I’m just going to touch quickly, because of time, on some of the things that we are doing. So, in South Africa, through the National Advisory Council, our strategy is to support repositories, as you can see here, working with universities, research councils, national facilities, institutions, museums, and others. We are trying to ensure that repositories fall within the institutional policies that these institutions are prioritizing, so that the content they generate can be linked into all the repositories that we can push. In terms of these, there are two policies: the first one covers research outputs, which each and every university produces, and the second one covers creative outputs. This could be your film arts, visual arts, music, theater, design, et cetera. So far we have tried to ensure that all the universities have a repository, both for publications and for data. Also, through the University of Pretoria, there has been a project where we have developed the South African SDG app to harvest all the collections that are within the universities. This also speaks to the issue that was raised, that we need to train our librarians to be able to index some of this content, to ensure that it falls within the SDG criteria, through a national taxonomy that is also supported by the National Development Agenda of the country. So, if we look, I’m not going to touch on much. The leverage for open access is through the library experts that we are also capacitating within the repository field, to ensure that the repositories follow best practices around the world. The universities have a choice, but most of them are using DSpace as software, and through DSpace we are able to collaborate to fix problems that can happen, et cetera.
So, these are some of the repositories that are in the institutions, and also, you can see the data repositories that are created so far from some of the research intensive universities, and also the OJS system. Thank you, Chair.
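The harvesting workflow Lazaros describes, pulling collections from university DSpace repositories and indexing them against a national SDG taxonomy, is typically built on the OAI-PMH protocol, which DSpace exposes by default. The sketch below is a minimal illustration of that idea: rather than calling a live endpoint, it parses a trimmed, invented ListRecords response, and the two-entry taxonomy is a toy stand-in for the actual national taxonomy mentioned in the talk.

```python
import xml.etree.ElementTree as ET

# A trimmed OAI-PMH ListRecords response of the kind a DSpace repository
# returns over HTTP (the record contents here are invented for illustration).
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Community health outreach in rural clinics</dc:title>
          <dc:subject>public health</dc:subject>
        </oai_dc:dc>
      </metadata>
    </record>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Open textbooks and quality education</dc:title>
          <dc:subject>education</dc:subject>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

# Tiny stand-in for a national SDG keyword taxonomy.
SDG_TAXONOMY = {
    "SDG 3": {"health", "well-being", "disease"},
    "SDG 4": {"education", "literacy", "textbooks"},
}

def classify(text: str) -> list[str]:
    # Tag a record with every SDG whose keywords appear in its text.
    words = set(text.lower().split())
    return sorted(sdg for sdg, keys in SDG_TAXONOMY.items() if words & keys)

def harvest(xml: str) -> list[tuple[str, list[str]]]:
    # Walk each OAI-PMH record, pull Dublin Core title/subject fields,
    # and classify the record against the SDG taxonomy.
    root = ET.fromstring(xml)
    out = []
    for rec in root.iter(f"{OAI}record"):
        title = rec.findtext(f".//{DC}title") or ""
        subjects = " ".join(e.text or "" for e in rec.iter(f"{DC}subject"))
        out.append((title, classify(f"{title} {subjects}")))
    return out
```

In a real harvester, the XML would come from a request such as `<repository-base-url>/oai/request?verb=ListRecords&metadataPrefix=oai_dc`, paged via resumption tokens; the parsing and classification step would be the same.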

Xiaofeng Tao:
Thank you, Professor Daisy and Mr. Lazaros. Am I right? I’m sorry. So, I think Professor Zhou is the remote moderator. Are there any questions or comments from online? Professor Zhou.

Online moderator:
Yes, Professor Tao. I think I had a few questions online, but due to time restrictions, I’m not sure if our speakers can respond to all the questions. There is a question from Andrej Khrushchev from the Common Fund for Commodities. He indicates that the global commodity value chain is very complex. Could the speakers speculate on how technology could support the green transition, food security, and energy efficiency? I think Horst also raised his hand. I don’t know if he has any response or any question. I’ll pass the floor to the onsite chair, Professor Tao. Yes, thank you.

Horst Kremers:
Just a short remark, because such questions are unusual in our normal working groups. There are professions around, such as the one Andrej represents, which would need to join the whole effort for all these consequences of what we are doing; we are not just collecting data, we are doing these processes for certain purposes. And as Andrej said, for the green transition, food security, energy efficiency, transport efficiency, and so on, all these data spaces that I mentioned in my viewgraphs come together, and we have to find out how to put them together. There are models in food security, there are models in energy efficiency. In the other viewgraphs, I said we have to join these ontologies. This is a problem in itself, and I hope to stay in contact for doing more in that direction.

Xiaofeng Tao:
Okay, thank you. I think there are no more questions right now, because the time is limited. Yes. Okay. Okay, thank you, Professor Zhou. Due to our limited time today, and since all of the speakers presented many excellent points of view, I might need another one or two hours to conclude. So this is the end of this workshop. We want to extend our most profound appreciation to all the experts for their exceptional presentations, to both on-site and online participants for their insightful questions, and of course to the organizers, whose dedication and tireless effort made this workshop a success. Thank you very much. Thank you. I would like to call all of you to come here, so we can get together to take a picture. Okay? Very good. Good. Come on. So maybe we get to know each other, and next year we can meet again. Thank you. I’d like to take the opportunity to give my special regards to Liu Chuang, for we have worked together for more than 20 years on these topics, and I hope we can do so in the future. Thank you. Thank you.

Speaker statistics (speech speed, speech length, speech time):

Tomoko Doko: 140 words per minute, 1724 words, 738 secs
Audience: 170 words per minute, 336 words, 119 secs
Daisy Selematsela: 164 words per minute, 766 words, 281 secs
Horst Kremers: 123 words per minute, 2398 words, 1166 secs
KE GONG: 112 words per minute, 1594 words, 857 secs
Lazaros: 136 words per minute, 396 words, 175 secs
Liu Chuang: 138 words per minute, 1749 words, 763 secs
Online moderator: 101 words per minute, 108 words, 64 secs
Ricardo Israel Robles Pelayo: 124 words per minute, 990 words, 481 secs
Xiaofeng Tao: 134 words per minute, 636 words, 284 secs

GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111



Full session report

Alan Paic

The Global Partnership on Artificial Intelligence (GPAI) is an international initiative focused on promoting the responsible adoption of artificial intelligence (AI). Established in 2020, GPAI currently has 29 member countries. Its mission is to support and guide the ethical and trustworthy development of AI technologies.

GPAI operates through a multi-layered governance structure comprising a ministerial council, executive council, steering committee, and a multi-stakeholder experts group. The ministerial council convenes once a year, while the executive council meets three times a year. The current lead chair of GPAI is Japan, with India set to assume the chairmanship in the future. This multi-level governance approach ensures that decisions are made collaboratively and with diverse perspectives in mind.

Project funding for GPAI is obtained through various mechanisms. Initial funding was provided by France and Canada, with additional contributions coming from GPAI pooled seat funding. In-kind contributions from partners and stakeholders are also welcomed to support project funding. This approach allows for a diverse range of contributions and promotes broad participation in GPAI initiatives.

GPAI is actively involved in a global challenge aimed at building trust in the age of generative AI. In collaboration with multiple global organizations, GPAI has structured the challenge into three phases: identifying ideas, building prototypes, and piloting and scaling. This global challenge seeks to address the proliferation of fake news and the growing threats to democracies. By fostering trust in generative AI, GPAI aims to ensure that AI technologies contribute positively to society.

Alan Paic, a strong advocate for GPAI, provides an in-depth overview of its governance, membership, and initiatives. His support for the project reinforces the importance of responsible AI adoption and the need for international cooperation to address the challenges associated with AI technologies. Paic also promotes the upcoming global challenge, highlighting the importance of building trust in AI systems.

GPAI has also made significant contributions in specific areas, playing a pivotal role in advancing detection mechanisms among AI companies and emphasizing the importance of accountability and transparency in AI technology. The impact of these efforts is evident as some countries have incorporated GPAI’s guidelines into their legislation.

Looking towards the future, GPAI envisions becoming the global hub for AI research and resources. To achieve this, GPAI aims to pool together global resources and expertise in AI. By bringing public research institutions together and collaborating with international networks, such as the Worldwide LHC Computing Grid, GPAI seeks to enhance understanding and advancements in AI technology.

In conclusion, GPAI is a major international collaboration that aims to promote responsible AI adoption, build trust in AI systems, and address the challenges posed by AI technologies. With its multi-layered governance structure, project funding mechanisms, and involvement in global challenges, the partnership is crucial for shaping the future of AI in a responsible and ethical manner.

Audience

The analysis of the speakers’ statements reveals several important points regarding the Global Partnership on Artificial Intelligence (GPAI) and its work. During the meeting, GPAI presented and discussed its work streams, generating significant interest. It was particularly noteworthy how these work streams were mapped against the Hiroshima commitments, underscoring the relevance and alignment of GPAI’s activities.

In addition to the mapping exercise, there was a request for insight into GPAI’s future work and thoughts on partnerships. This emphasizes the need for ongoing collaboration and clarity regarding GPAI’s direction and objectives. The speakers expressed a neutral stance on this matter, seeking more information and guidance.

GPAI’s efforts to address concerns and challenges in the field of artificial intelligence were highlighted. This includes ongoing interactions with a council that funds various projects. The council funds both ongoing and completed projects that aim to advance AI, with reports on project progress available on GPAI’s website, ensuring transparency and accountability. Additionally, GPAI seeks advice from experts in various fields to ensure the quality and relevance of its projects.

Gender diversity and equality in AI emerged as significant concerns during the meeting. Paola Galvez, a gender advocate, questioned the presence of activities related to creating diversity and gender equality in AI. This raised an important point about the need for inclusivity and addressing the gender gap in the field.

India expressed optimism about leading GPAI in the future and raised the question of whether there would be an initiative to bridge the gender gap in AI. This indicates a willingness to take action and promote gender equality within GPAI’s activities.

Peru, as the first country to have a law on AI for social purposes, expressed interest in becoming a member of GPAI. This demonstrates the broader international appeal and recognition of the partnership’s significance in advancing AI policies and governance worldwide.

Slovakia, a non-member of GPAI, is considering membership and seeks further information. Specifically, they are interested in understanding the specific regulatory support activities of GPAI and how non-members can participate in the upcoming India summit. This suggests a growing interest and potential expansion of the partnership’s membership.

The analysis also highlighted the issue of fragmented and limited public sector research on AI. The majority of research and development is concentrated in a few large private companies. This underscores the need for increased collaboration and knowledge sharing between the public and private sectors to ensure a more comprehensive and well-rounded understanding of AI.

GPAI aims to address this fragmented approach by pooling resources and establishing partnerships with other international networks. The goal is to leverage the collective expertise and resources of all countries to have a greater impact on AI research and development.

Civil society and think tanks expressed keen interest in participating in the upcoming summit, showcasing their desire to contribute to the discussions and exchange of ideas. This indicates the increasing recognition of the importance of diverse perspectives and input in shaping AI policy and governance.

Finally, Ben, an advisor to the Westminster Foundation for Democracy, noted the challenges and opportunities posed by AI in election processes in the Indian presidency. This highlights the need for careful consideration and the development of strategies to address potential risks and harness the benefits of AI in these critical democratic processes.

In conclusion, the analysis of the speakers’ statements reveals various important points regarding the Global Partnership on Artificial Intelligence. The mapping of work streams against the Hiroshima commitments generated interest, while questions were raised about future work and partnerships. Gender diversity, membership expansion, public sector research, civil society involvement, and AI in election processes were also discussed. These insights emphasize the need for collaboration, inclusivity, and thoughtful governance in shaping the future of AI.

Kavita Bhatia

During the discussion, the speakers highlighted India’s vision for artificial intelligence (AI) and its potential to drive social and economic growth. They emphasized the importance of AI in bringing efficiency to administrative procedures, which in turn could contribute to economic growth. By automating various tasks and processes, AI has the potential to streamline operations, increase productivity, and foster innovation.

Furthermore, the speakers discussed how AI could empower citizens by providing them with easier access to their entitlements, thereby contributing to social growth. AI has the potential to bridge gaps and provide services to citizens more efficiently, improving their overall experience. This inclusivity was seen as crucial, particularly in a country like India that boasts a diverse linguistic landscape. The speakers stressed that AI should be inclusive and enable citizens to access services in their vernacular languages. In support of this, they highlighted the creation of a multi-modal AI platform called ‘Bhashini’, which facilitates speech-to-speech machine translation in multiple languages.

The discussion also delved into the significance of skilling initiatives in preparing for an AI-driven future. Efforts to inculcate AI knowledge at the school level were mentioned, underscoring the need to equip students with the necessary skills and knowledge to navigate the evolving technological landscape. The availability of financial support for PhD students in the field of AI further highlighted India’s commitment to fostering expertise and innovation in this domain.

The need for a Global Partnership on AI (GPAI) was brought to the forefront during the discussion. The speakers emphasized the importance of GPAI as a central point of contact for AI-related information, standards, and frameworks. India’s involvement in GPAI was highlighted, with the country taking the lead chair and hosting the upcoming summit in December. The aim is for GPAI to have an independent identity, similar to that of the World Health Organization (WHO) in the field of health.

Finally, the speakers emphasized India’s AI approach of democratizing access to AI resources. This involves streamlined access to high-quality datasets, which are vital for research and innovation. Additionally, India aims to ensure access to compute power and skilled resources, acknowledging the significance of these factors in driving AI development.

Overall, the discussion highlighted India’s comprehensive vision and approach towards AI. By focusing on inclusive AI, skilling initiatives, global collaborations, and democratizing access to resources, India aims to harness the potential of AI to drive social and economic growth while reducing inequalities. The insights gained from the discussion underscore the need for a holistic and collaborative approach towards AI adoption and development.

Inma Martinez

The Global Partnership on AI (GPAI) has played a pivotal role in promoting the responsible development of artificial intelligence (AI). Between 2015 and 2018, AI experienced exponential growth and brought about significant advances in various areas, including neural networks, language models, computer vision, AI-driven drug therapy development, and level 5 automation in cars. These advancements have had a transformative impact on society.

GPAI emphasizes the importance of responsible and trustworthy AI. As AI technologies continue to evolve, there is a growing need to ensure that their development and use adhere to ethical principles and best practices in data governance. GPAI also recognizes the significance of fostering innovation in the future of work, highlighting the need to address the challenges posed by AI and promote responsible practices.

In addition, GPAI promotes the deployment of AI for industry and enterprise applications. Through a project that supports small and medium enterprises, GPAI assists these organizations in identifying suitable AI solutions for their challenges and finding local AI solution providers. This initiative aims to enhance the competitiveness of these enterprises by leveraging AI technologies.

GPAI also addresses concerns about intellectual property rights in AI. The organization has a project dedicated to this issue, recognizing the importance of creating a framework that protects and encourages innovation in AI while providing mechanisms for intellectual property rights.

The proposal to establish an expert support center in Tokyo has received positive feedback. This initiative aims to strengthen the support system for experts involved in project-based activities. Once approved, this center will provide valuable resources and expertise, further enhancing GPAI’s capabilities.

GPAI actively seeks partnerships and values decentralization to bring in as much external expertise as possible. By collaborating with research and innovation centers and specialists from various fields, GPAI ensures diverse perspectives and a multi-stakeholder approach in addressing AI-related issues.

In terms of regulatory activities, GPAI plans to organize workshops in an incubator style, covering topics such as contract law and AI intellectual property. These workshops, led by renowned expert Lee Tiedrich from Duke University, seek to bring together specialists and encourage the exchange of knowledge. AI scientists and practitioners from any country are invited to contribute to these regulatory activities.

While acknowledging the risks associated with generative AI for democratic countries, GPAI remains driven by shared democratic values. This emphasis on democratic principles further strengthens GPAI’s commitment to addressing the challenges and ensuring responsible AI deployment.

GPAI’s projects encompass responsible AI and data governance to enhance democracy and protect human rights. The organization actively works on initiatives such as human rights projects related to data governance. By focusing on these areas, GPAI aims to utilize AI for the betterment of society, welfare, and the creation of equitable opportunities.

Overall, GPAI’s efforts in advancing AI, promoting responsible practices, supporting industry applications, addressing intellectual property concerns, establishing expert support centers, promoting partnerships, and safeguarding democratic principles demonstrate its commitment to creating a beneficial and ethically-driven AI ecosystem.

Yoichi Iida

The Global Partnership on Artificial Intelligence (GPAI) is an international collaboration aimed at promoting the responsible deployment of AI technology in society. GPAI has focused on a range of topics, including responsible AI, data governance, the future of work, and commercialization and innovation through AI technology. This comprehensive approach demonstrates GPAI’s commitment to addressing various aspects of AI and its impact on society.

To facilitate the exchange of ideas on the implementation of AI, GPAI has organized over 20 side events, providing a platform for experts, researchers, and stakeholders to come together and share their insights. These events have played a crucial role in promoting dialogue and knowledge-sharing among different actors in the AI ecosystem.

The collaboration between GPAI and other international streams has been deemed vital for achieving effective AI governance. Discussions on AI governance have been integrated into the G7 agenda, highlighting the importance of addressing the risks and challenges associated with AI on a global scale. This collaborative approach ensures that diverse perspectives and expertise are considered in shaping policies and frameworks for responsible AI development and use.

Recognising the need to strengthen GPAI, Yoichi Iida, a key advocate, believes in the significance of establishing an expert support centre in Tokyo. This centre would serve as a valuable resource by providing expert-level assistance to GPAI’s initiatives. It is noteworthy that the government is actively involved in supporting this proposal, both financially and through providing necessary personnel resources. This commitment further emphasises the importance placed on GPAI and its mission.

A proposed third expert support centre in Tokyo would operationalise the strengthening of GPAI. This new centre would play a crucial role in implementing projects and promoting the visibility and awareness of GPAI’s activities. Through this initiative, Yoichi Iida aims to enhance the understanding and perception of GPAI’s work, both within Japan and internationally.

In conclusion, GPAI is at the forefront of promoting responsible AI technology deployment in society. With a comprehensive focus on various aspects of AI and its impact, GPAI has facilitated knowledge exchange through side events and engaged in collaborative efforts with international partners. The proposed establishment of an expert support centre in Tokyo further reinforces the commitment to strengthen GPAI. Overall, Yoichi Iida’s efforts highlight the importance of responsible AI development and the need for global cooperation in shaping its governance.

Abhishek Singh

India is preparing to host the Global Partnership on AI (GPAI) summit in Delhi from 12th to 14th December. The summit aims to become the leading platform for AI, bringing together nations, stakeholders, industry, and academia to discuss and collaborate on AI-related challenges and opportunities. In addition to the main themes for GPAI and its working groups, the summit will feature an AI expo and an AI game changers competition for startups. The deadline for startups to enter the competition has been extended from 15th October to 15th November.

India is also working to expand the membership base of GPAI in order to include a broader range of perspectives, engaging with the Secretariat and member countries to determine how GPAI will be expanded. India has made significant progress in integrating AI into digital public infrastructure projects, such as the identity platform, digital payments ecosystem, and document exchange platform. They believe that AI can enhance the value and effectiveness of these projects, addressing challenges in areas like healthcare and agriculture.

Collaboration is crucial for regulating AI and ensuring its fair and widespread application. India is collaborating with other nations and experts to create frameworks and guidelines for responsible AI use, addressing ethics, data governance, and other important issues. Gender bias exists in AI algorithms due to biases in input data, but efforts are being made to encourage more women to participate in AI skilling programs and balance representation. India recognizes the significance of collaboration in AI and is introducing a collaborative AI theme for the 2024 presidency, exploring shared compute infrastructure and datasets for research.

While the GPAI summit is primarily open to existing member countries, non-members are encouraged to participate in side events and exhibitions. India is willing to showcase its digital public infrastructure projects and AI developments to visiting countries, and believes in sharing advancements and promoting international collaboration in the digital space. Overall, the GPAI summit presents an opportunity to come together, collaborate, and shape the future of AI, with a focus on responsible and ethical development and deployment.

Session transcript

Inma Martinez:
Yes. Welcome, everyone, to Session 111, the Global Partnership on Artificial Intelligence, a multi-stakeholder initiative on trustworthy AI. My name is Inma Martinez. I’m going to moderate this session. I’m also participating as one of the panelists because of my role as chair of the multi-stakeholder expert group. I will make the introductions of the members of this panel. To my right, my colleague, Yoichi Iida, who is deputy director for the G7 and G20 relations at the Ministry of Internal Affairs and Communications of the government of Japan, and who is also the co-chair of the Global Partnership on AI Steering Committee. And online, we have my colleague, Alan Paic, who is head of the GPAI Secretariat hosted at the OECD. And from India, we have the CEO of Digital India Corporation and India AI, Mr. Abhishek Singh, at the Ministry of Electronics and IT of the government of India. And we’re also expecting a fourth member of our panel, Ms. Kavita Bhatia, who is group coordinator of the Emerging Technologies Division at the Ministry of Electronics and IT of the government of India. The order of the session intends to provide you with a scope of what GPAI is as a multi-stakeholder initiative of international scope, now running in its fourth year. Each of the members of the panel, in their capacity as co-chairs and as organizers of the next presidency, will reflect on how GPAI is delivering value to the world and to the member countries, and what we hope for the future. And I would like to invite our colleague, Alan Paic, head of the Secretariat at the OECD, who is online, to start the presentation with an overview of GPAI as an organization. Alan, the floor is yours.

Alan Paic:
Thank you very much, Inma. It is my pleasure to address you today. I will give an introduction to GPAI as a multi-stakeholder initiative on trustworthy AI. So, GPAI is a long-term initiative which is specifically dedicated to AI-related priorities, and it has a multi-stakeholder focus to convene experts from a wide range of sectors. The mission of GPAI is really to bring both countries on one side and experts from different stakeholder groups together to support and guide the responsible adoption of AI. And we know that especially in 2023 it becomes highly relevant to ground the adoption of AI in human rights, inclusion, diversity, gender equality, innovation, economic growth, and environmental and societal benefit, and to seek to contribute concretely to the 2030 Agenda and the UN Sustainable Development Goals. This has been GPAI’s mission since its creation in 2020, and this year it became more and more evident how important this is, and how big the risks are, in parallel, of course, to the huge opportunity which is given by the development of the technology. So we do have a global and inclusive membership, which is open to emerging and developing countries as well as developed countries. This membership is informed by the shared principles which are reflected in the OECD Recommendation on AI, which is today also widespread; the G20 has also adopted a similar set of recommendations inspired by the OECD recommendations. So the GPAI members today number 29. And as I have mentioned, we do have an increasing number from emerging and developing countries, including Argentina, Brazil, India, Mexico, Senegal, Turkey, and Serbia, for example. But we also do have, as you see, most of the major leaders in AI technology on board as GPAI members from the government side.
Now, how does GPAI function? We do have a very elaborate governance: we have a ministerial council and an executive council, which are representative of the member countries; then we do have a steering committee, which is the place where the different stakeholders meet, where we have representatives of both the member countries and the experts. And then we have the multi-stakeholder experts group, of which Inma is the chair. This experts group encompasses expert working groups and currently two expert support centers, in Paris and Montreal. Their work is also supported, or will be supported in the future, by the work of national institutes, institutes which will contribute to the concrete projects which GPAI is putting forward. Now, the GPAI council, as I mentioned, has two parts. It has the ministerial council, which meets once per year, and our next meeting is in New Delhi; our colleague from India, Mr. Abhishek Singh, will talk about this forthcoming summit, which is very important. The Executive Council meets at the working level three times per year and gives guidance to the GPAI Secretariat on internal processes, project financing, and the work plan, and approves the GPAI budget. So for the Council this year, the lead chair is Japan. The incoming chair is India, who will become the lead chair for next year. And the outgoing chair is France, who was the lead chair last year. The Steering Committee, as I mentioned, is the place where all the stakeholders meet in this multi-stakeholder initiative. So we do have in the Steering Committee six representatives of the governments and six representatives of the experts. Within the representatives of governments, we have three representatives which are the co-chairs of the initiative, and we have two additional government representatives appointed by the Executive Council.
And then we have a specific seat, which is reserved for an emerging or developing country, which is also appointed by the Executive Council. And this shows the commitment which GPAI has to support membership from such countries. The Steering Committee also does meet sometimes in the extended format, where all the GPAI members are invited to participate in the deliberations. I think one important point is the project funding. How are the projects funded at GPAI? So we do have the baseline budget envelope for projects, which has been historically provided by France and Canada, who were at the origin the founding members of GPAI. This is going to be complemented now by a mechanism of GPAI pooled seat funding, where all the countries will be able to contribute. Then the second part of the project funding comes from in-kind contributions through National AI Institutes, who can contribute in-kind computing power, data resources, human resources, et cetera, to GPAI projects. And then finally, we do have also partnerships, and partnerships can bring both in-kind and financial contributions to specific projects in the GPAI work plan. I would like to mention also the GPAI webpage, which I would encourage you to visit. You will find a lot of exciting information there. You will find our new reports. We have two new reports which are featured on our webpage, which are recent and very topical. One is about generative AI, jobs, and policy responses, and the other one is about AI foundation models and detection mechanisms, saying that whatever new foundation model is put out there needs to provide a detection mechanism which would help us identify that a given text has actually been produced by an AI. You will find information about the GPAI Summit. At the GPAI Summit in New Delhi, we will be launching many more new reports which are just being finalized right now. So watch this space for the new exciting results from the GPAI experts.
And also, you will find information about the G7 commitments to advancing generative AI policy, where this collaboration does include GPAI. And finally, you will find information about the global challenge to build trust in the age of generative AI. We know how fragile trust is today, with the proliferation of fake news and big threats to our democracies and so on. So we do want to have this global challenge. This is a big initiative which is launched in collaboration with the OECD, with UNESCO, with AI Commons, IEEE, and VDE. And we are actually right now in a call for partners. So a call to all of you who are listening in today: if you are interested to collaborate on this very exciting initiative, please do click on this partner inquiry form. Just very briefly, to explain what this is about: it is in three phases. In the first phase, we do want to identify promising ideas for policy and technology solutions for building trust in the age of generative AI. Those ideas which do show potential will get some resources to build a prototype in the second cycle, and the successful prototypes then will be encouraged to pilot and scale in the third cycle. So it is a very exciting new transversal initiative with global partners. Please feel free to apply and partner with us. Thank you, Inma, and over to you.

Inma Martinez:
Thank you so much, Alan. I’d like to move now to a very important aspect of what GPAI is, which is our mission and our vision. And I would like to invite my colleague Yoichi Iida to present to us GPAI’s mission and vision, and really what the Executive Council has done to address the emerging challenges that AI is presenting, with the dynamism that is required, during the presidency of Japan, as well as how Japan in this year has been steering the GPAI mission in a very, very tough and conflicting year with lots of work to do, especially from the G7 perspective of having to come up with the Hiroshima process. Thank you, Yoichi. The floor is yours.

Yoichi Iida:
Okay, thank you very much, Inma, for your very kind introduction. And good morning, good afternoon, good evening, everybody. My name is Yoichi Iida, actually formerly Deputy Director General for G7 and G20. At this moment, I’m working as Assistant Vice Minister at the Ministry of Internal Affairs and Communications. And my work is mainly looking at multilateral digital policymaking, including GPAI, OECD, G7, G20, and also IGF altogether. And this is a very, very busy year for us. But at the same time, we are very happy to see different frameworks being synergized with each other, not only around AI but also in other digital policymaking, including data flows or infrastructure and so on. So with regard to the GPAI operation, we took the lead chair position late last year and we are working as the lead chair for 2023. GPAI has a very unique structure, not only in the organizational structure as presented by Alan in his presentation, but also in the process structure. The lead chair country hosts the summit meeting on the very first day of its chair tenure, so it happened late November last year, and on that very day we succeeded to the lead chair position from France, and we put our effort into continuing GPAI’s successful work and also further promoting its very important responsibility in promoting the global AI ecosystem. Actually, in the GPAI executive council, all member countries are working together on how to promote the responsible deployment of AI technology into society through four different working group topics, which are responsible AI, data governance, the future of work, and commercialization and innovation through AI technology in society. So we, the government members, are closely working together with the private sector experts to promote those projects in four categories, and we as lead chair, Japan, wanted to promote the uniqueness of GPAI and also to improve the weaknesses of GPAI. 
So in the beginning of our chair tenure, we recognized through our discussion in previous years some of the challenges for GPAI. The first would be how to strengthen the messaging, delivering our message to the rest of the world, and in order to achieve that we decided to elaborate the first ministers’ declaration, to be delivered by the ministers at the GPAI summit last year. That is one instrument. Secondly, we wanted to add a very timely topic for AI development, AI for a resilient society. So we added AI for a resilient society to the priority topics for GPAI activities. And then we also wanted to strengthen the opportunity for experts to bring their message to the rest of the world through the many events associated with the GPAI ministerial summit in the same venue in Tokyo. So we had more than 20 side events where many experts presented their own views and also had some exchanges on how we could work together to promote responsible deployment of AI technologies in our society. These are the three major focuses which our lead chair presidency worked on, and I hope these three emphases contributed to some extent to the development of GPAI work this year. And we also wanted to create a stronger synergy between GPAI work and other international work streams. That is why we picked up the AI topic as part of the G7 agenda this year, and we discussed AI governance, global AI governance, as part of our working group discussion, and we agreed on further work to build up the global governance policy for generative AI. In order to do that, we have agreed that we should work closely with international organizations, including GPAI and OECD and UNESCO, to understand better how generative AI would support human society, what kind of risks and challenges may come up, and how we could address those potential risks and challenges through collaboration with international organizations and the experts working there. 
So I will stop there, but these are the efforts by Japan as lead chair presidency, and I hope these efforts will be carried forward by India, who will lead the chair next year. And of course we, Japan, will continue to make a contribution to GPAI work next year and beyond. Thank you very much.

Inma Martinez:
Thank you very much, Yoichi. As co-chair of the steering committee and a colleague of Yoichi throughout this year, I have to say that the Tokyo summit was really probably one of the best events I’ve ever attended. I think as a scientist myself I felt that the sessions, not just the ones that the members of the multi-stakeholder expert group presented but also the peripheral sessions from academia and industry that came to the summit, were truly exciting and really to the point of the times, because AI these days grows exponentially, not linearly, and everybody has to react in ways that are much faster than before and with better solutions and forward thinking. So thanks very much, very grateful for all the work that Japan has done for GPAI this year. And I’d like to proceed and present the other pillar of GPAI, which is the multi-stakeholder expert group, which is the group I joined in 2021 when the government of Spain, one of the members, nominated me and I entered one of the working groups, which was innovation and commercialization of artificial intelligence for small to medium enterprises. And last year during our elections I then took the chair. This is a very singular community because it’s the first time that the AI community of not just academics but also industry, AI scientists, lawyers, civil servants, organizations working on ethics, and trade unions have come together to work on very, very specific initiatives. And this puts us in a position where we serve at the pleasure of the members, but at the same time the members give us the flexibility to propose approaches: how would we as scientists, as AI people, deal with some of these challenges? And this synergy is what makes GPAI truly special and a first for many governments when it comes to AI advisory. And the makeup and the fabric of the MEG is quite varied. 
More than half of the expert group is composed of AI scientists, true AI scientists, and then we are complemented with other people from trade unions, from civil society, from industry who also have long careers in artificial intelligence. I would say the average age is above 40 for sure, if not more. And because the membership at present has a huge component of European countries, and the member countries nominate experts, that’s why we have so many European experts in the group. But we are also expanding to bring experts not just from the membership: when we work on projects that require specific skills, we invite other members of the AI community to work with us as specialists. And this is one of the keenest efforts that we have for these years to come, which is to bring in the voices of developing nations, of emerging markets, of other scientists who do incredibly valuable work that complements our own work. The gender gap is not too bad, I would say. We are making huge inroads in closing this 15 percent difference, because one of the points of governance that we have in the MEG, as we call it, is to ensure representation, and not just between the genders but also geographies. Every working group has two co-chairs and we try to bring people from different continents. And what is it that we do? Lots and lots of calls on video platforms, because all of the experts are based in their home countries. And we are supported by the two expert support centers, which are CEIMIA in Montreal and INRIA in Paris. And this is probably one of the best elements of the governance of GPAI: it is decentralized, and at the same time each member, in this case France and Canada, puts forth their centers of innovation and research to our advantage, to our support. And we organize summits. We held a summit online in 2020. We did it for the first time in person in Paris in 2021. 
We did it in Tokyo last year and we very much look forward to the summit in India this year. And as my colleague Alan mentioned, we engage with other organizations in common goals and projects, like the Global Challenge which Alan explained earlier. Now, why GPAI engaged the AI community has a very simple answer. Between 2015 and 2018 the advancements in artificial intelligence grew exponentially. We had huge advances in neural networks, in transformer modeling within language models. We brought computer vision, AI-driven drug therapy development, level 5 automation in cars. It was a huge movement of advancement in AI that really put a new perspective on almost 70 years of artificial intelligence. And that’s why the original founding members decided we need an association, an international initiative, that will be able to understand this massive exponential growth and transformation, so that governments really get on a roadmap of dealing with it with innovative approaches. And GPAI, because it’s an initiative, has a governance that is very singular. So we have a federated governance. Every country puts in support from their local leading institutions. And every member of GPAI, from a country perspective, brings individual and collective leadership, exactly what we do in the expert group. And we look for multi-stakeholder equity, because we know that AI has to be an AI for all. And this is why not just the council brings new members from all geographies of the world, but the experts do the same. And one of the things that differentiates our projects from other projects around artificial intelligence in the world is our mandate. How we have been mandated by the council is: come up with solutions, real actionable solutions that go beyond policy. Yes, you can advise on policy, of course you do, but bring solutions that we can implement, that we can roll out in our global markets, and also find standards for all of us to agree upon. 
So the way that we understood that mandate, especially in 2023, when the emergence of generative AI really brought a new perspective and enormous challenges to society and governments, was bringing the members and the experts together. So this is something very singular. We created a town hall in May, in which the experts invited all of the member countries to attend. And we explained how we, the AI community, understood the risks, but also the opportunities, of language models, foundation models, generative AI in general. And it was a town hall format. Anybody could ask anything. And we made it very free, free-flowing, so that for the first time it was a conversation, it was interactive. In September, we hosted the first innovation workshop for members and their delegations and our experts at CEIMIA in Montreal. What you do in an innovation workshop is challenge ideas, and you address whether everybody understands the same thing when we discuss, for example, risks. Are the risks for a member country in Europe the same as for one in the Americas? Do we all prioritize the same challenges in the same way? So it was bringing the existing roadmap of challenges, risks and projects that the Council and the GPAI expert group had, but we put it through the lens of: are we all on the same page? Have things changed such that we need to modify some of these assumptions and hypotheses? So we really behaved as artificial intelligence scientists, and it was really successful because everybody felt that for the first time governments and experts were working together for two days in the same room, as you can see from the pictures, throwing ideas, challenging ideas and agreeing on approaches. And the way we operate is on a four-pillar structure. The big theme of artificial intelligence, as you all know, is to ensure that it is done responsibly, that it is trustworthy, that it carries the ethics of the future that we want for our people. 
We also concern ourselves with how the future of work will evolve with artificial intelligence coming into industry and society. And then a pillar of AI is data, so of course data governance is one of the most active working groups in the MEG, alongside innovation and commercialization of AI. AI finally is becoming a product and a service, is coming into industry, is coming into the hands of people, and we need to ensure that the service level agreements, that the human centricity by design, really come with it. As one of our experts mentioned, AI should come to us in a state that is already safe, so that we don’t need to make it safe because it’s dangerous. We should really strive for an AI that comes to us in the best possible state. So how do we respond to the challenges that the member governments undergo on, I would say, a monthly basis? Well, one of the big challenges was presented to us by the Hiroshima process in May. Together with the OECD, we were called upon to support the vision that the G7 had as to how we needed to act quickly and steadily and on very solid ground with regard to generative AI, advanced AI. And immediately we looked around and we realized that we were already working on absolutely all of the points that were coming out of that mandate, out of that call. So as you can see from the list, obviously, taking measures to ensure that the risks are met and addressed. We also need to mitigate vulnerabilities: you know, how it comes to market, and what capabilities and limitations something coming to market in inappropriate ways would create. So we respond to all of these objectives in a variety of ways. In some we produce ideas, such as sandboxing for responsible AI. In others we look at what Alan mentioned: detectors, real tech that actually addresses the dissemination of fakes and misinformation, et cetera. 
And as you can see from these columns, for the elements of risk and concern listed by the G7, we already had projects operating in the different spheres of what needs to be done. And if you want a deeper view of what these typical projects are: for example, one of the most exciting ones, which actually has been presented in the EU Parliament, and in fact is being incorporated into the amendments to the AI Act, and has also been presented in the US Congress, is: can we create detection mechanisms in order to ensure that when this type of AI is commercialized, it already comes with detection mechanisms that people themselves can action and test, is this thing fake news, but also the social media platforms? And this is real tech addressing a technological problem. It’s not a policy, it’s not a framework, it’s something very much like an asset. And in innovation and commercialization we have a project that has already entered beta, which is a portal that has been launched in Singapore, in France, in Germany and in Poland as the beta testbeds, in which small and medium enterprises of all sectors can consult which AI solutions are appropriate for some of their challenges, for some of their gaps, and not only that, but who the AI solutions providers are in their local markets. So this is another asset that is put in the hands of industry. And one of the most exciting projects, which is really addressing something incredibly hard to frame, is: can we make IP out of artificial intelligence? We started this project in 2022 and it had a fantastic year in 2023, because we then organized workshops in conjunction with, and with the support of, other research institutes like Max Planck in Germany and also at Duke University in Washington DC. And it’s really addressing contract law. And contract law is very hard, because the way contracts are drafted is an art: they have to have, you know, the proper address in the proper language and really provide guarantees. 
So when it comes to intellectual property, the contract laws are expanding, and I invite you to follow this project, because in 2024 we will create an incubator. So if you work in IP law and your focus is artificial intelligence, please contact us, because we will be running this incubator in 2024. And the next steps for other projects, for example the one that I lead, because it is the one I created when I joined, really encompass all of the nations. Agriculture is one of the pillars of our civilization. Artificial intelligence is creating prosperity and new ways to ensure that arable land doesn’t decrease, that water resources are preserved, and that we really can feed 10 billion people in sustainable ways. And regulating AI as well: what is the landscape of AI regulation across the board? How is each nation dealing with their own AI regulation? Can we find standards? This is another exciting project. As well as the future of work. The future of work is very vital, because there is much misunderstanding as to what AI brings to industry and society, and there’s much fear as to perhaps being relegated to a secondary role as humanity and as workers. And this is one of the working groups that has the most activity. They have projects for 2024 in which they undertake projects with university students in South America. They will see the impact of generative AI in Spanish, and they will try to get down to areas where we can learn from countries in development. As well as how the working conditions of employees and workers are changing with the rise of AI within their companies. This is just a picture. I hope that you can visit our website and get familiar with the rest of our projects. And like I said before, we are a growing organization, and the strength of the collective is because of what the individuals bring to us. And now it is my great pleasure to introduce to you our future presidency lead in GPAI, which is the government of India. 
And for that we have with us our colleague in the steering committee, Mr. Abhishek Singh. I believe he’s online from India. The floor is yours. Oh yes, he’s there. I think I need help from the AV team in managing; I should not be on the big screen now. Can you please give the floor to Mr. Abhishek, who is online? I can see him; and disconnect me. He’s on mute. Okay. I think, Abhishek, you are now able to speak. A little bit. Abhishek, could you speak louder? We don’t hear much. No, nothing comes through. Okay. How about now? Now you are. Thank you so much, and for your patience. Thank you. I don’t know

Abhishek Singh:
what the glitch was, but anyway, what I wanted to say is my colleague, Kavita Bhatia, is here. She will be making a presentation with regard to the summit that we’ll be hosting in December. So Kavita, can you share the slides and make the presentation? Yeah. Good afternoon,

Kavita Bhatia:
good morning, and good evening to all of you. I’ll just share my screen. Is the screen visible? Yes, yes. Yes, Kavita, it’s visible, and you could switch on your camera also, maybe. I don’t think I’ll be able to, because of the setup I’m running this from. I’ll do it afterwards. So, for India’s vision for AI, what is very important is that we understand that this technology brings a lot of focus on emerging technologies, but we want to ensure that the technology will bring social and economic growth in line with inclusive development. In fact, our Honorable Prime Minister has always been saying that we need to make AI in India and make AI work for India. And he also believes that the technology should be rooted in the principles of sabka saath sabka vikas, which means working with everyone and benefit for everyone. So this is the basic vision of India for AI. In fact, we have a very simple approach, because you all know that India is a very large country, and we know that AI has the potential to improve public service delivery by bringing more efficiency into administrative procedures and keeping the citizen at the focal point, so that the services which we develop with the help of AI should be beneficial to the citizen. And AI also needs to overcome the traditional barrier of not having inclusivity. So we want to make it more inclusive for the development of large-scale social transformation solutions. As I said, AI enables the policy developers so that they are able to take the right decisions based on the data, so that the decisions taken for the development of social benefits should be meant for the citizen, and he should be able to get whatever benefit he’s entitled to. In fact, this is not where AI should stop. It should also empower the citizen so that he knows what his entitlements are and what benefits he is supposed to get. 
And he should approach the government so that he’s not debarred from the benefits which he’s supposed to get. And AI provides innovative models for governance, so that we can have innovation for the public good and create new economic opportunities. So this is the main focus and the approach which India is taking forward. And we have already come up with a strategy for AI which basically focuses on democratizing access to AI resources. By AI resources we mean streamlined access to good-quality data sets for research and innovation, access to compute, and, most important, skilled resources, so that innovation can be brought into the system. So with these principles in the background, we have come up with a comprehensive program on AI which focuses on these three pillars. One of the most important pillars which we have kept in this program is the National Center on AI, where we plan to implement 10 solutions across the country so that we can see the benefit that AI brings for the nation. Responsible AI is also one of the pillars, as I said, which is very important because the solutions should not cause harm to human beings. So we have already detailed the principles of responsible AI, we have worked on the operationalization mechanism, and we have also gone ahead with one use case of applying these principles to facial recognition technology. As I said, India is a very large country; we have 22 languages which we speak, and more, around 1,000-plus dialects. So we understand that AI should bring inclusivity and should also enable citizens to get services in the vernacular language. So we have already created a platform, a multi-modal AI platform called Bhashini, which is built for speech-to-speech machine translation. In fact, at the G20 we showcased this solution, and we also added 10 international languages, which were showcased at the G20. 
The other important aspect which we have worked on is the framework for fairness assessment, because for the solutions which are brought to the market or taken up for implementation, it should be made sure that they are fair and do not have any biases attached to them. So we have come up with the framework, and along with BIS we are also working on the other standards which are very important for developing a successful AI solution. Skilling, as I said, is something we have already made note of, and we understand skilling is very important. So we are trying to cater to the skilling aspects at all levels. One, at the very initial level, when a child is in school, we want to make him understand what AI is, so that we can demystify the harm which he has been told that AI can bring. The second level is that we want to re-skill and up-skill our IT professionals so that they are up to the mark in the era of AI, and so that they are able to tackle the problems which AI might bring, such as job losses. And the third area which we have tackled is the researchers. We understand that we need to have more researchers so that we can develop our own LLMs. So we have come up with a program where we are financially supporting PhD students in the area of AI and emerging technologies. So we are trying to cater to the skilling aspects at all three levels, so that not a single layer is left out. With regard to these principles and this vision for AI: this year we are going to take the lead chair in December. However, we have already started working for GPAI as the incoming chair, and we will be hosting our annual summit in December, which has been talked about by Inma as well as Alan. In December we are going to have a summit where we will bring global policymakers together to have more discussions on responsible AI. 
So our main focus as incoming chair this year has been to increase member and expert collaboration, and in this regard we have already had convenings on three of the working groups which Inma has shown, data governance, innovation and commercialization, and responsible AI, where we have brought in the ecosystem of AI to understand what the GPAI experts have been doing, and we also wanted the industry to come and share their experiences and the vision which they want from GPAI. So this we have already done, and the fourth convening, on the future of work, we’ll be having shortly. The most important thing which we plan to do in our presidency is that we want to make GPAI an independent identity, as a multilateral initiative, so that GPAI can be the point of contact for all AI-related information or standards or frameworks, as WHO has been in the case of health. This is what we want to do in our presidency. And we also want to enhance our advocacy efforts, so that we will bring more visibility to GPAI outputs, and we would also like to proliferate and adopt the work which GPAI has been doing in the last four years. Another thing: we want to increase participation and enhance the membership, so that we can bring in a wide variety of experts who have different national and regional views and experience, so that we have a holistic view of AI across the world. And last but not least, we want to promote equitable access to critical AI research and innovation resources, which are compute, data, algorithms, software, experts and other related resources, for the countries which don’t have access to them, so that we have this equitable AI research and innovation access for everyone. So this is our vision for our presidency, and we will be hosting a summit in December in New Delhi, India. With this, I would like to thank all of you and will come out of my presentation. Thank you.

Abhishek Singh:
Thank you so much. Thank you, Kavita. Just a minute, Inma. Thank you, Kavita, for laying down the vision and the plan that we have for the GPAI summit. We are looking forward to hosting all of you in Delhi from the 12th to the 14th of December, and as Kavita mentioned, apart from focusing on the themes for GPAI and the working groups and getting all the stakeholders and all the experts to come here and join it, we are also having a few other add-ons to the summit. There will be an AI expo, in which we are getting startups from all across the world to come and show the AI-based solutions that they have. We’ll be having the AI Game Changers challenge that we have launched; we have shared the information with all of you. If there are startups who are building any solutions related to AI, any dimension of AI, and they want to participate in that, the last date has been extended from the 15th of October to the 15th of November. We would like you to share it with the relevant stakeholder community with regard to the AI Game Changers. And we’d also have a lot of side events which will focus on various themes of AI. So if there are any member countries or any of the stakeholders who are represented at the IGF who would want to contribute to the discussions in the side events at the GPAI summit, we look forward to hearing from you, look forward to getting your involvement, your participation. Because the way the summit is being planned in India is that we want to make it the go-to event, just like the Internet Governance Forum is the go-to place for issues related to internet governance, which we all look forward to as an event held annually. Similarly, GPAI, as the prime body with regard to artificial intelligence, will bring together all nations, all stakeholders, civil society, non-government organizations, industry and academia into this partnership. GPAI needs to evolve into that. We are working towards that. 
And we are also working with the Secretariat and member countries with regard to how GPAI will be expanded. So we look forward to getting all your views and your participation in the GPAI summit in December.

Inma Martinez:
Thank you so much, Abhishek and Kavita. Thank you so much for showing us what is to come in 2024. I’d like to move to the discussion now. And I want to remind our audience that we will have 15 minutes of Q&A; you can use the chat or you can ask the questions here in the room in person. But first, I’d like to start with Yoichi Iida. You always talk about strengthening GPAI. Can you give us some highlights as to why you firmly believe in that and what is to come in that respect?

Yoichi Iida:
OK, thank you very much for the question. And yes, we strongly believe that the strength and uniqueness of GPAI exist in the expert-level structure, and multi-stakeholderism is the center of the value. So ideally the government and other stakeholders should support the private sector experts who are working on the project-based activities through the working groups, and their work is now supported by two expert support centers located in Montreal and in Paris. So in order to strengthen GPAI’s value and function, we would like to strengthen the support system for expert activities, and that is why we are proposing to add an expert support center in Tokyo. So we have put forward this proposal to establish and add a new expert support center in Tokyo at the executive council this year, and we believe the proposal was welcomed generally. Of course, we need to go through the discussion at the steering committee and the ministerial council, but once it is approved, we would like to operationalize the concept of a third expert support center. In order to do that, we at the same time need to prepare on our side to bring some financial and personnel resources to manage the center, and we are now internally discussing across the government how we could do that. Of course, we as government need to bring this into action. That is one of the ideas we want to realize, and also we are trying to promote visibility and awareness among people of GPAI’s activity, and we hope this will be promoted through closer collaboration between GPAI and the Hiroshima AI process, where we are discussing how we could promote and materialize project-based activities to accumulate some evidence on what kind of measures and practices might work to address some of the potential risks and challenges brought by generative AI and foundation models. And also we may do some projects to understand better how we could responsibly deploy generative AI and foundation models into society. 
These topics and projects can be implemented through the newly established expert support center in Tokyo. That is one of the ideas we are now trying to promote, and I hope this will contribute to the further scaling up of GPAI's functions. Thank you very much.

Inma Martinez:
Thank you. Thank you so much, Yoichi. I'm really delighted to hear this because, as I mentioned earlier, the strength of GPAI resides in its multi-stakeholder community and its decentralized and federated governance, and nothing would delight me more than having an expert group in Tokyo. I'd like to ask something of Abhishek Singh that is also super exciting, because we are all looking forward to having India in the presidency, in the lead seat. Every time that we have met, and you have come over to the working groups and looked at the projects, what do you feel is the most significant difference between our projects and your agenda as the CEO of the digital agency in India, under a very visionary mandate from your prime minister?

Abhishek Singh:
Thank you, thank you, Inma. I must straightaway mention that one key value we get from being part of GPAI, and from interacting with the multi-stakeholder group, the Centres of Expertise in Montreal and Paris, and the experts working on projects across the four working groups, is a lot of insight into what more can be done. As you rightly mentioned, India has been the leading country when it comes to implementation of digital public infrastructure projects. Under the Digital India brand, we have implemented digital projects at population scale: our identity platform has more than a billion people registered, with almost 70 million authentications happening on a daily basis; our digital payments ecosystem is one of the world's most robust and largest digital payments platforms, with more than 10 billion transactions every month; and our document exchange platform, DigiLocker, has more than 200 million registered users. So in India, whatever we do is at scale. But now, as we move on and try to leverage artificial intelligence, we are seeing a lot of value addition when you bring a layer of AI to the digital transformation projects we have implemented. When we do that, when we, for example, use face recognition for authenticating people, we are using a very simple AI tool, but then all the issues related to ethics and responsible AI come in. As AI adoption grows, the future of work becomes a very important issue. We have a large population; almost 50 million Indians are working in the IT and ITES sector, and the way AI is coming in, some of these jobs will be impacted. So we need to work with the global community. We need to work with the experts.
We need to work with other nations, especially on coming up with frameworks and guidelines for regulating AI: ensuring that innovation and regulation go hand in hand, ensuring that we are able to provide equitable access to AI, ensuring that we are able to democratize AI, and, most importantly, bringing in an era of explainable AI. Very often, these things cannot be done by one nation alone or a few corporations alone. There is a significant concentration of AI technology in a few companies and a few nations, but if we are to harness the benefits of this technology, we need to take it further. We need to ensure that there is access to compute, that there are frameworks for data governance and for leveraging the data that can be used for building AI models, and that there are solutions that can be applied to population-scale societal problems. For example, how do we build AI solutions for solving healthcare issues? How do we use AI to detect tuberculosis or cancer? How do we use AI to help farmers across the country? When we do that, the real benefits of AI will come in, and low- and middle-income countries stand to benefit a lot. So what we are doing is integrating artificial intelligence, and advancements in the field of AI, with the digital public infrastructure projects we have implemented, and working with the global community to fast-forward that. Whatever we have learned, whatever we have built, and whatever we are building together becomes part of the global DPI repository. As the G20 declaration mentioned building a global DPI repository, the AI solutions will also become part of that repository, and many of the solutions developed in cooperation with other member countries will become available for adoption and replication across the world.
So that's the value we see in being part of GPAI: the benefit of engaging with the real experts, real technologists, real engineers, and real social scientists who are working in this field.

Inma Martinez:
Thank you. Thank you so much. I completely second everything you said, and something very important that both of you have mentioned comes through in the themes of each presidency: a resilient society, empowering people to respond to challenges, and making AI equitable and accessible to all. This is the century of human-centricity, putting people at the center of everything we create so that we can really build a future for everyone. I'd like to open the floor to anyone online or present in the room to take the opportunity to put questions to Yoichi and Abhishek, or myself. If there are any questions, please raise your hand and somebody will bring you a mic. Or let me just check whether anything is happening online. Yes, we have one question over there. I'll give you the mic myself. Please let us know who you are and your organization.

Audience:
First, I just want to say thank you so much for the great presentation; I found it really helpful. My name is Ed Teller, from the global AI policy team at Amazon Web Services. The slides were all great, and the one I found particularly interesting was seeing how GPAI's work streams are being mapped against the Hiroshima commitments. I wanted to ask a couple of questions about that: firstly, how you see that work going forward, because you are likely going to see reporting against the Hiroshima principles, so I think that's a really helpful lens for understanding the work; and secondly, how you are thinking about partnerships and collaboration across those work streams. Thanks very much for the really helpful presentation. Thank you.

Inma Martinez:
We have an ongoing interaction with the Council. At the inception, the member countries set out their concerns and the challenges they felt they needed to address. We then looked at those concerns and challenges from our perspective as engaged scientists, because we all work back home at our universities, labs, and in industry, and we decided how we would address each issue. We propose projects and things to develop, the Council funds them, and we set out to deliver the specific initiatives and projects. Some of them are ongoing; others were completed within a period of two years. All of this is shared back with the member countries and also publicly, because the reports on our progress are posted on our website. Our approach to the projects is that we cannot be the only ones working on them. As Yoichi rightly said, the strength of GPAI is that once we get our mandate, we look at the world and ask ourselves: is there an expert somewhere that we should invite to work with us on this project? Those are the specialists. For example, in my agriculture project, I immediately reached out to NARO in Japan, the agency that oversees the technification of agriculture, and the director-general himself is my specialist; he comes to the meetings every two weeks and has brought information, insights, thoughts, and strategies. That's how we work, and this is how the community brings real insights, because they come directly from the places where AI is being created. We are expanding: we are now moving into IP law, and into civil society, ethical societies, and companies that think about service-led innovation when it comes to putting products into the world.
What are the principles that guide you when you create a product so that it is safe? That is really the uniqueness of this initiative, which truly is unique, because there is nothing like it, and we hope to strengthen it with the support of our member countries, their leadership, and their own research institutes and innovation centers. That's why it's decentralized and federated. Thanks so much for the question. Thank you. We have another one here. Let me just pass you the mic.

Audience:
Good afternoon. Good morning. My name is Paola Galvez. I'm Peruvian, currently based in Paris. I just finished my graduate internship at the OECD and my master's at Oxford, and I am a former advisor to the Secretary of Digital Transformation in Peru, where I oversaw the design of the national AI strategy. I have one question because, personally, I'm a gender advocate in whatever I do. I saw that in the Responsible AI initiative there is an activity, number five, on creating diversity and gender equality in AI. Could you please expand a bit on it? I would also like to ask India, the future chair, because I saw their principles, and it was really fantastic to hear their optimism and how they want to position GPAI as the multilateral initiative to come to when we need expertise: would you be open to developing an initiative that works on bridging the gender gap in AI that we have in the world? And my second question: coming from Peru, I am happy to see many developing countries here, but what are the requirements for other countries to become members? Speaking of Peru, we are the first country to have a law on AI for social purposes. I think that is something our prime minister would be interested in, but I would like to know the requirements so that I can go to them and tell them about the incredible initiatives you have. Thank you.

Inma Martinez:
So, is the gender question for me or for Abhishek?

Abhishek Singh:
I can take that, no worries. Thank you. In fact, I really thank the lady from Peru (I didn't get your name) for the question; it raises a very rightful and very impactful issue, because gender has been an issue, especially with regard to AI algorithms and the biases in AI, whichever way we look at it. That is primarily because of the bias that exists in the input data. If the data is not equitable, in the sense that the data is biased (a very typical example being that AI models treat engineers as men and teachers as women), then these biases, if we are aware of them, can be resolved to some extent. That is at a very basic level. Other biases exist when it comes to gender and AI. Very often in India, what we've seen is that even in AI skilling, it is the men and the boys who take the dominant share of AI skilling and AI training, because of societal biases, because of access to devices, and because they have greater access to higher education. So all those biases come in. We therefore have a conscious plan within India: whenever we talk of digital training and digital skilling, we try to balance it out, and we take proactive measures to encourage more women to take up courses and AI-based skilling, so as to ensure a fair balance. Whichever datasets are used, how do we mitigate the biases in them to ensure that AI is gender-neutral and more equitable for the whole population? Those are some of the measures being taken, but yes, it will take a lot of time to train the models to be aware of the biases and how to get rid of them.
That is the work that is part of the ethical framework and the responsible assessment of AI solutions, wherein we address biases relating to gender, race, or the other dimensions of diversity that exist in the world. So that's on the gender issue. The collaborative part is, again, something very useful, and one of the themes we have introduced for the 2024 presidency is collaborative AI for building partnerships amongst various stakeholders. How do we join hands? How do we share knowledge, experience, and models, and work together on building big solutions? In fact, Alan from the Secretariat very often talks about building a CERN for AI. Just as the world community has come together to work on particle physics and its advancement at CERN, can we think of having a shared compute infrastructure? Can we think of having shared datasets on which research could be done? Can we think of sharing insights and partnerships among AI researchers across institutions? That would really ensure that we work together collaboratively in the field of AI and develop AI for the betterment of humanity, rather than always just being aware of the biases or the bad things that can come from artificial intelligence.

Inma Martinez:
Thank you. I feel the second question, about further countries joining, is perhaps for Alan. Okay. Hello.

Alan Paic:
Yes, I can certainly say something about further countries joining. We do have a membership process, which is well defined and described on the website; further countries are invited to apply. Right now, the intake for 2023 has closed, and there will be a next opening in 2024, so watch the GPAI website, where everything is explained. The deadline is around June or July, and countries are expected to present a letter of intent setting out their motivation to join GPAI as an initiative committed to trustworthy AI. So that is the path to membership. I would also like to react to what Mr. Singh just mentioned. Yes, we have been talking about the future perspectives of GPAI. GPAI has achieved very significant impact; Inma mentioned previously, for instance, the detection mechanisms, the obligation on companies putting foundation models on the market to provide detection mechanisms that allow us to recognize that something has been produced by that foundation model. That is very significant and has already been taken on board in some countries' legislation. I think GPAI, going forward, wants to have more and more impact, and I'm really happy to hear that India has this vision for the new year and the new presidency: to lead GPAI toward pooling resources together. Basically, the idea comes from the understanding that today a lot of R&D is concentrated in the hands of a few large private companies, that public-sector research is far behind and has a very limited understanding of the new technological advances, and that public-sector spending is fragmented among different countries.
As Mr. Singh mentioned, the idea of GPAI as a go-to place, where all the countries come together and pool their resources in data and in compute power, perhaaps alongside other international networks that already exist, such as the Worldwide LHC Computing Grid for particle physics, could take GPAI even further on this ambition. We do, of course, want to partner with private companies, which is great, but we also want to bring together public research institutions, and this is the model of the national AI institutes that I mentioned in my introductory speech. Thank you.

Audience:
Thank you very much. My name is Juraj Corba; I represent the government of Slovakia, from the EU. I have two questions, if I may. In your presentation, which was very helpful and very informative (thank you for that), you mentioned that you are planning some activities to support regulatory work. Could you please be more specific about what you are planning? That's the first question. The second relates to the India summit, so it may be a question for our friends from India. You have mentioned that you are planning to organize the summit as the place where we come for the AI topic. Slovakia is not a member of GPAI; we are considering possible membership. My question is: how open will you be to the participation of non-member countries, and how can they effectively participate in the India summit? Thank you very much.

Inma Martinez:
I will answer the first question, and I believe the second is for Mr. Singh. The workshops I mentioned, regarding contract law and AI and IP, are planned to take place in an incubator style. If you go online and look for this project, you will see the person leading it, Lee Tiedrich, a professor at Duke University, and she will be able to share the schedules. As I mentioned, the projects seek specialists, and the specialists are invited to work with us from any country. So if there are projects in which you feel some of your AI scientists and practitioners would really like to participate, we would like to invite them to contribute; the projects are open for collaboration. And then I believe the second question was for India, for Mr. Singh, for Abhishek: can government delegations attend the summit in India?

Abhishek Singh:
Yeah, thank you, thank you. I got the question. Thank you for the questions, Slovakia, and for the interest in the summit. The way GPAI is constituted, the ministerial and the various official engagements that will happen on the 13th and the 14th will only be open to the existing member countries. And of course, with regard to the membership application, I don't know how quickly we can work on approving the membership. But yes, there are the side events on the 12th, and there will be an exhibition that will be open to all guests. So if you want to come and visit, you can write to us and we will work out the modalities and the sessions in which you will be able to participate as a non-member. We are willing to look into that. We would also be doing some showcasing; there is a lot of interest, and our prime minister himself said that if other countries are coming, we can showcase to them our digital public infrastructure projects, how they work, what we have done in AI, and the side events in which they could participate. But yes, the ministerial, the steering committee, and the executive committee, those formal events of GPAI, will be open only to the member countries.

Inma Martinez:
Okay, any other questions? Yeah, we have.

Audience:
It's just a continuation of his second question. Let me introduce myself: I'm Kamesh Shekhar, from India, and it's a proud moment for us that we will be hosting the summit very soon. I'm from a think tank called The Dialogue; we are based out of Delhi. I have a follow-up question: you mentioned that there will be side events and other opportunities. I just wanted to understand how, as a think tank and as civil society, we can also take part in the summit, and what we should watch out for as the summit comes into the picture. That's my question.

Abhishek Singh:
The details of the side events will be up on the website very soon, hopefully by next week or so, and we will welcome registrations for the side events from non-members also. There are a lot of think tanks who already want to take part; there is a lot of interest from industry and from the startup community. Within India, we had a big meeting yesterday in which more than 50 people participated, and they all gave various ideas about what we should be covering, especially with regard to building consensus on the key issues the world faces in the advancement of artificial intelligence and other technologies. So we look forward to that. We have been getting a very good response from all stakeholders, especially the G20 countries and countries beyond, for our initiatives. In fact, I would like to mention that even in the G20, as part of the Digital Economy Working Group and based on requests we got from multiple countries, we hosted a global DPI summit in Pune in June, and that included a lot of countries outside the G20. Almost 50 countries that were not members of the G20 took part, because they wanted to know what we have been doing in the digital space, and eight of those countries have already signed MOUs with us for the replication of some of the India Stack solutions in their countries; none of them were G20 members. A similar approach will apply here. The official meetings will be open only to the members, but the non-official parts, the exhibition, the side events, and the keynote talks (we are trying to get some leading AI scientists and researchers to come and deliver keynotes) will be available to people who are not officially members of GPAI.

Inma Martinez:
Okay, and we have a question right next to you.

Audience:
Thanks very much. My name is Ben Graham-Jones. I advise the Westminster Foundation for Democracy, a UK public body working on democracy and elections, especially on issues pertaining to technology; I'm currently working on 10 to 15 elections. I understand that a big principle of GPAI is that it is very much guided by the shared democratic principles of its members, and I'd be keen to know, as you move forward into the Indian presidency, whether there are also plans to address new issues around AI in election processes, both the challenges and the opportunities, moving into the year ahead.

Inma Martinez:
Well, from the experts' perspective, we know that one of the major concerns with generative AI, and with AI that has been misused, is the risk to democratic countries, the risk to democracy in the world. This cascades into various projects, not just one, because we believe that the pillars of the world are democracy and a welfare society that looks after its people and ensures their well-being. If you want, I can look through all the other projects after the meeting and tell you which ones; the theme runs across various projects, from responsible AI to data governance. For example, data governance in 2022 had a specific project on human rights, and that obviously comes from, you know, non-democratic situations. So I can talk to you after the session. Are there any other questions? I think we have reached minus one minute, and I would like to thank Yoichi Iida, Mr. Abhishek Singh, Kavita Bhatia, and our colleague in Paris at the OECD, Alan Paic, for convening and being with us, and for presenting our vision, our hopes for the future, and the singularity of this initiative. Many times, when people ask me what GPAI is, I say: when you wonder whether governments care about people, this is one of those cases. They truly do, and they do their best to take the reins of our future and make sure that AI is for opportunity, for good, and for welfare. Let's hope we can achieve that. Thank you all for coming to this session. Many thanks to those who connected online, and I declare the session finished. Thanks very much for your questions. Thank you, thank you very much. Thank you.

Abhishek Singh

Speech speed

213 words per minute

Speech length

2545 words

Speech time

717 secs

Alan Paic

Speech speed

132 words per minute

Speech length

1792 words

Speech time

815 secs

Audience

Speech speed

169 words per minute

Speech length

967 words

Speech time

344 secs

Inma Martinez

Speech speed

137 words per minute

Speech length

4554 words

Speech time

2001 secs

Kavita Bhatia

Speech speed

167 words per minute

Speech length

1520 words

Speech time

548 secs

Yoichi Iida

Speech speed

95 words per minute

Speech length

1199 words

Speech time

761 secs

Fake or advert: between disinformation and digital marketing | IGF 2023 Networking Session #171

Full session report

Heloisa Massaro

The commercial marketing industry has always been a significant source of funding for newspapers and has a critical influence in shaping the information environment. Understanding how programmatic ads work and how they finance online ad campaigns is crucial for making informed choices about the structures behind online advertisements.

In Brazil, workshops were conducted with digital marketing actors, highlighting the necessity of integrating robust risk analysis into marketing and advertising content creation. This is aimed at tackling the risks associated with disinformation and hate speech. By embedding risk analysis, marketing campaigns and advertisements can be developed with the necessary precautions to counteract disinformation.

Heloisa Massaro advocates for the development of best practices and guidelines for the advertising industry to mitigate potential negative effects on the information environment. The Internet Lab conducted a project called “Desinfo,” initiating a dialogue on best practices and guidelines in the advertising industry.

The influence of digital influencers in politics is seen as a problem due to the difficulty in separating their work from their political marketing roles. This raises concerns about the transparency and credibility of the information disseminated by digital influencers.

Self-regulatory bodies play a crucial role in addressing disinformation in advertising. Discussions are held regarding measures to mitigate risks through self-regulation, promoting responsible advertising practices.

Exploring regulatory approaches is also important in handling disinformation in advertising. Mention is made of a platform regulation bill that tackles fake news in Brazil. These regulatory approaches aim to create a more accountable and transparent environment in the advertising industry.

To summarize, the commercial marketing industry significantly influences the information landscape. Understanding programmatic ads, integrating risk analysis, and developing best practices and guidelines are essential in addressing disinformation and ensuring responsible advertising practices. It is important to address the influence of digital influencers and explore regulatory approaches to mitigate potential negative effects on the information environment.

Audience

Political advertising plays a significant role in modern political systems, but it is a complex and problematic issue. This form of advertising has the potential to be weaponised and has frequently been used for data targeting, as highlighted by the Cambridge Analytica scandal. The misuse of data for political purposes poses a serious challenge to the integrity of elections and democratic processes.

It is argued that the role of political advertising needs better management and interventions to address these challenges. Election observation groups, such as the National Democratic Institute (NDI), engage in monitoring political advertising to ensure transparency and fairness. However, the Cambridge Analytica incident has underscored the need for stronger measures to regulate the use of data in political campaigns.

The involvement of digital influencers in political advertising further complicates the situation. There is a difficulty in distinguishing their actions as independent content creators from their role as political marketers. This blurring of lines makes it challenging to discern the extent of influence they have over public opinion and the potential impact on political campaigns.

To mitigate the risks associated with political advertising, it is argued that regulation should be developed to observe how advertisements contribute to disinformation in political campaigns. The dissemination of false or misleading information poses a serious threat to the integrity of elections and public trust. The difficulty lies in distinguishing between political content and other types of content circulating on the internet, which requires careful monitoring and regulation.

In Brazil, there is a self-regulating council for government advertisements. This council, overseen by the Brazilian Internet Steering Committee and advised by technical expert Juliana, aims to ensure that government advertisements adhere to ethical and legal standards. While the self-regulatory framework is in place, it is important to consider how measures to mitigate risks can interact with this framework and state regulations. The potential for regulatory capture within self-regulating councils and other complexities must be acknowledged and carefully addressed.

In conclusion, the role of political advertising in modern political systems necessitates better management and intervention. The weaponisation of political advertising, data targeting, challenges related to digital influencers, and the dissemination of disinformation all underscore the need for regulation and monitoring. As seen in Brazil, self-regulatory councils can play a role in ensuring ethical advertising practices, but it is crucial to consider the interactions between mitigation measures, self-regulatory frameworks, and state regulations. By addressing these concerns, steps can be taken towards fostering fair and transparent political campaigns and preserving the integrity of democratic processes.

Eliana Quiroz

An analysis of the role of marketing companies in the disinformation ecosystem reveals various perspectives. One viewpoint asserts that marketing companies are integral to the spread of disinformation. They excel in providing marketing strategies and facilitating effective micro-targeting, enabling the dissemination of misleading information. This complex ecosystem is formed by the involvement of multiple private companies in digital marketing and disinformation.

Contrarily, another perspective argues that the distinction between companies offering marketing services is blurred. This lack of clarity makes it challenging to define individual responsibilities in the disinformation ecosystem. For instance, Meta, a digital platform, provides marketing advice and services to influential clients, while newspaper companies in Peru act as intermediaries. This emphasises the need for a comprehensive understanding of the different actors involved to effectively combat disinformation.

The analysis also notes the impact of the Cambridge Analytica model on digital marketing and disinformation companies. This model, involving detailed data analysis and targeting strategies, serves as a reference for manipulating public opinion. However, its full implementation requires sufficient resources and interest. In cases of limited time or money, certain elements of the model may be utilised.

Having an understanding of country-specific marketing services is essential in addressing disinformation effectively. The analysis highlights the wide range of marketing services available in the global South, reflecting diverse resources. Additionally, journalists and influencers can play significant roles in the disinformation ecosystem. Therefore, a tailored approach is necessary to combat disinformation successfully.

Shifting focus to political advertising, the analysis underscores the importance of identifying the various actors involved to ensure transparency. The entities involved in political advertising include marketing companies, influencers, data providers, data analysts, media production companies, digital communication and public relations firms, and fact-checking and public opinion companies. A thorough understanding of this ecosystem is crucial for promoting transparency in political campaigns.

Regulation is suggested as a solution for promoting transparency and protecting human rights in political advertising. However, striking the right balance with freedom of expression is essential. It is recommended that regulation extend beyond digital platforms to include companies engaged in political advertising.

Lastly, the analysis highlights the significance of inclusivity and raising awareness of human rights frameworks among companies involved in political advertising. Some companies may not fully comprehend their role within the context of human rights. By fostering inclusion and promoting awareness, ethical implications associated with political advertising can be addressed.

In conclusion, a comprehensive understanding of the role of marketing companies in the disinformation ecosystem is crucial. The blurred boundaries between companies and the influence of models like Cambridge Analytica must be acknowledged. Tailored approaches, regulation, and a focus on human rights and inclusion are necessary to effectively combat disinformation and promote transparency in political advertising.

Anna Kompanek

The analysis explores the important role of the private sector, particularly local businesses, in addressing the issue of disinformation. It suggests that the definition of the private sector should be expanded beyond just big tech companies to include local business communities. These communities are both contributors to and victims of disinformation, making it crucial to involve them in tackling this problem.

The analysis highlights the need to sensitize companies about the potential ramifications of their advertising placements. It points out that companies may indirectly support disinformation through their advertising spending, with ads appearing on disreputable websites associated with disinformation. Therefore, companies must go beyond simply reaching audiences and consider the potential negative consequences of their ad placements.

The business community is seen as a key player in improving information spaces and combating disinformation. It is noted that a growing segment of companies is recognizing the dangers posed by disinformation. These companies can support independent journalism through ethical advertising and other means. By investing in healthier information spaces, businesses can contribute to creating a diverse and reliable range of information for the public.

The analysis underscores the need for global support and responsible business practices to foster healthier information spaces. The report by the Center for International Private Enterprise (CIPE) and the Center for International Media Assistance (CIMA) emphasizes ethical advertising as one way to support independent journalism. It suggests that responsible businesses have the power to promote and maintain healthy information spaces through their practices and collaborations.

Independent journalism is emphasized as being vital in combating disinformation. It is recognized for providing a diverse range of information to the public, countering the spread of false or misleading information. This underlines the importance of supporting independent journalism in efforts to tackle disinformation.

Furthermore, the analysis notes that local businesses can play a significant role in investing in healthy information insights and independent journalism. They can contribute through various strategies, such as ethical advertising, impact investment, blended finance, corporate philanthropy, and corporate social responsibility (CSR) initiatives. These initiatives enable local businesses to have a positive impact on information spaces and support the work of independent journalists.

Collaboration between government, civil society, and the private sector is identified as essential in addressing disinformation effectively. It is noted that the biggest danger lies in governments passing laws without consulting civil society and local private sector representatives. On the other hand, collaboration and dialogue can lead to more informed policies and effective measures against disinformation.

A noteworthy observation is the value of bringing local business organizations together as part of broader coalitions to secure the information space. In the Philippines, for example, the collaboration between the Philippine Association of National Advertising and the Makati Business Club was instrumental in discussing and addressing issues related to information security. By uniting local business organizations, effective measures can be taken to safeguard information spaces and combat disinformation.

In conclusion, the analysis underscores the crucial role of the private sector, particularly local businesses, in addressing disinformation. It promotes the inclusion of local businesses in efforts to combat disinformation and emphasizes the need for responsible advertising practices and support for independent journalism. Collaboration between government, civil society, and the private sector is crucial, and local business organizations can contribute to securing information spaces through broader coalitions. By working together, these stakeholders can foster healthier information environments and mitigate the negative impacts of disinformation.

Herman Wasserman

Disinformation has been a longstanding issue in the global south, with its roots tracing back to colonial periods. During this time, various forms of communication and propaganda were used to justify the subjugation of the colonised. In the post-colonial era, states in the global south have continued to control the media and engage in disinformation campaigns, aimed at limiting critical voices and maintaining their power.

The scholarly production around disinformation reached its peak in 2016, following elections in the United States, bringing increased attention to the issue. The advancement of new technologies has further amplified existing trends and forms of disinformation, posing a significant challenge to the global south.

The global south faces a dual threat to its information landscape, both externally and internally. Foreign influence operations draw on historical loyalties and presences in the region, while repressive states exploit the fight against “fake news” to enact laws that effectively criminalise dissent and restrict freedom of expression.

Another factor contributing to the proliferation of misinformation in the global south is misleading advertising and sensationalist journalism. These practices can promote false information and pose a challenge to the sustainability of small, independent media outlets which often rely on advertising for financial support. Economic downturns, in particular, can lead to cutbacks on advertising, further threatening the viability of local news outlets.

Despite these challenges, citizens in the global south are actively combating disinformation through various strategies. These strategies are often intertwined with other struggles, such as those for internet access, digital rights, media freedom and education. It is crucial to acknowledge the agency of individuals in the global south in the fight against disinformation.

In terms of political advertising regulations, South Africa currently faces a disconnect between the outdated regulations and the current social media climate. Regulations primarily focus on traditional broadcast channels and newspapers, failing to address the unconventional methods employed by political parties in the digital realm. As a result, there is a need to update and adapt regulations to match the evolving landscape of political advertising.

While formal regulation is an important aspect of controlling political advertisements, it is insufficient on its own. Public awareness and understanding of political communication play a pivotal role, along with fact-checking as a crucial part of political discourse. A coalition of journalists and civil society organisations is necessary to scrutinise political parties’ claims and ensure accuracy and transparency.

In conclusion, the issue of disinformation in the global south is multifaceted and complex. It stems from historical contexts and continues to be perpetuated by external influences and domestic repression. Misleading advertising and sensationalist journalism add further challenges to the region’s media landscape. However, the agency of citizens, along with updated regulations and collaborative efforts, can mitigate the effects of disinformation and uphold peace, justice and strong institutions in the global south.

Renata Mielli

The analysis provided reveals the detrimental consequences of false and misleading information being spread through the Internet and digital platforms. It argues that the Internet has allowed the dissemination of unreliable news and misleading content on a large scale, negatively affecting society. This widespread dissemination of false information has drawn attention to its harmful effects on society, as it undermines the credibility and reliability of information sources and can potentially manipulate public opinion.

The findings also highlight the role of digital platforms in amplifying and promoting misleading, false, and harmful content. It is noted that content with demonstrably false information circulates more widely than verified content, feeding the business models of digital platforms. This is further exacerbated by the use of personal and sensitive data by digital platforms, enabling targeted advertising and content distribution across various platforms. The promotion of such content through sponsored and boosted content has a greater impact on reaching internet users.

In response to these issues, the analysis suggests the need for regulatory initiatives and stricter rules in online advertising. It argues that these regulations should consider specific aspects of information flow, the advertising market, and its actors, as well as how the business models of large platforms favor misinformation. The analysis emphasizes the importance of establishing strict measures for transparency and advertisement, as well as the corporate responsibility of intermediaries and links in the advertising chain in relation to the integrity of public debate.

Moreover, the analysis supports the call for more transparency and stricter rules in online advertising. It advocates for the disclosure of the reach and profile involved in advertisements or boosted content, contributing to accountability and limiting the dissemination of false information. The analysis emphasizes the significance of establishing clear guidelines and measures for transparency and advertisement.

Additionally, the analysis highlights the need for locally designed policies to regulate online platforms. It points to the Brazilian Internet Steering Committee’s consultation process on platform regulations, which addressed issues about concentrations in the online advertising market and the risks of the platform business model, such as disinformation and infodemics. This emphasizes the importance of tailored regulations that consider the specific challenges and dynamics of each region.

The analysis also discusses the challenges of conceptualizing political advertisement and the negative impact of advertisements on health. It acknowledges the difficulty in determining whether political party content should be classified as advertisement or not. Furthermore, it raises concerns about the effect of advertisements on health, particularly during the pandemic, emphasizing that misleading advertisements about medicines can negatively affect people’s lives.

Notably, some arguments within the analysis reject the idea of self-regulation in the advertisement sector. They highlight the impact of advertisements on health and emphasize the need for a more serious public discourse on advertisement. They advocate for increased scrutiny and public engagement to address the negative consequences associated with advertising.

In conclusion, the analysis provides insightful observations on the harmful effects of false and misleading information disseminated through the Internet and digital platforms. It emphasizes the need for regulatory initiatives, transparency measures, and stricter rules in online advertising to protect society from the adverse consequences of misinformation. The analysis also highlights the importance of tailored, locally designed regulations and discusses the challenges surrounding political advertisement and the impact of advertisements on health.

Session transcript

Heloisa Massaro:
So, hello, everyone. Hello, everyone who is here and who is watching us online. Thank you very much for being here. I know it’s almost the last session of the almost last day. So, it’s really great to have you here to hear us. So, thank you to everyone and thank you to our panelists, Renata, who is on my right side, Anna, who is on my left side, and Eliana and Herman, who will be joining us online. So, just to give a quick overview of the topic and why we proposed a session on this topic. So, the topic in general, I mean, what do we want to discuss? We want to discuss what is the role of marketing and advertising dynamics and actors in the information environment, what are the risks, what are the implications, and how can we build, and this is the key issue, how can we build best practices and guidelines for the advertisement industry. And I think it’s worth mentioning that this topic actually appeared for us through a project we developed last year which we called, in Portuguese, Desinfo, and I think it’s worth, I realize that I haven’t presented myself. My name is Heloisa. I am director at Internet Lab. Internet Lab is a Brazilian think tank on digital rights and Internet policy, and we developed last year this project called Desinfo, which aimed at developing best practices, actually at starting this conversation on best practices and guidelines for the advertisement industry, bearing in mind the role of the marketing industry in the information environment. And this is important because the marketing industry has always had an important role in shaping the information environment and influencing it, and it’s interesting to think about it on two levels. The first is more economic, structural: the way commercial marketing structures itself, and where it puts money and where it advertises its pieces, it’s normally a key source. 
Commercial marketing is normally a key source of funding for information, for newspapers, and it has always been. But on the second level, there is also the narrative side of it: marketing upholds and creates narratives that impact the information environment, and this becomes more prominent in the digital era, where the production of information is decentralized and new forms and new strategies of digital marketing appear. And there is also, we can say, a crisis in the authority of science and journalism. So, with all this together, this theme becomes even more important when we move online. So, during this project, what we did, and I will stop here and pass to our invitees, is that we actually mapped the initial themes and subjects that relate marketing and advertisement in general, and digital marketing in particular, to the information environment. Based on that, we workshopped these themes with Brazilian digital marketing actors. This was a collaborative project developed with a marketing agency, and we workshopped these themes, and the goal was to try to understand how this appeared in their daily work, and how we could move towards guidelines and best practices. And the result of this was what we called a working guide, actually an in-development guide, for digital marketing, which covers topics such as influencer marketing and the importance of ethical safeguards and good practices when hiring influencers for marketing; social media ads and website banners, what we call programmatic ads, and the importance of developing an understanding of how they work and who will be financed depending on the choices and the structures; and finally, the last topic, the narratives that can be created and fostered by advertisement campaigns. 
And I would say that the key takeaway of these workshops, which is in the guide, is the importance of embedding risk analysis as regards disinformation and hate speech in the whole process of developing marketing campaigns and advertisement strategies. So, I would say this was a really rich process, in which we were really able to engage with a lot of marketing actors in the country, but it was, as I said, like a first step. We wanted, like, to open the conversation. And the aim of this panel is, like, to dig into this topic and, like, to create the opportunity to develop further on the challenges and the possible ways to go under this topic. So, I have said enough, and I will pass. What we will do is a first round of five minutes with our speakers, and then we will open the floor, and then we’ll get back. So, first, I want to invite Eliana Quiroz, who is joining us online. She is a member of the Board of Internet Bolivia. She is a PhD candidate at Universidad Mayor de San Andrés, La Paz, focusing on disinformation’s impact on marginalized communities. She holds a Master in Public Administration and has 20 years of experience in international cooperation agencies, including the World Bank and the United Nations. In 2021, she researched disinformation during Bolivia’s political crisis and authored the first academic handbook on Internet and society in Bolivia. And Eliana will give us a brief overview of the role of digital marketing in the information disorder. Eliana, please, the floor is yours.

Eliana Quiroz:
Hi, Heloisa, thank you. Good morning here, and I guess good afternoon and good evening there and everywhere. So, thank you for the invitation, and your introduction was really great, especially because Brazil is one of the examples of disinformation and campaigns in the world, and you come with examples from grassroots, so from the practice. I want to share some initial thoughts from my research in Bolivia, also trying to understand which are the actors of a disinformation ecosystem. And when we are talking about that, it’s very obvious that private companies are key actors in the disinformation ecosystem. And I’m talking about, of course, marketing companies, which is the focus of this session. But when I try to identify these marketing companies in practice, the borders between different private companies offering different services blur. So, we can find, for example, digital platforms giving advice on marketing strategies. For example, it’s very well known that when Meta has big clients, let’s say clients that are going to spend one million, five million dollars on their platforms, on Instagram and Facebook, they bring in some intermediary between this big client, let’s say, and Meta. And this intermediary is there to help them micro-target and direct the ads in the best way. And at that moment, this intermediary is bringing some services around marketing, around digital marketing strategies. And in Bolivia, for example, this intermediary was a bureau of lawyers. But in Peru, for example, it’s El Comercio, a well-known mainstream newspaper. Or, for example, when I’m talking about these borders that are not so clear, marketing companies offering databases or even data science services. 
So, my first point here is that when talking about marketing private companies as actors in the disinformation ecosystem, we are really talking about different private entities bringing services and, of course, having an interest in making money out of this business. So, we should try to understand each ecosystem and the practice as it works in each country, to identify not only marketing companies, but different private companies or private actors, private entities. The second idea is that these companies have a model, and the model is Cambridge Analytica. So, when you have a lot of money and a lot of interest, you will have the whole model of Cambridge Analytica. And when there is less money or less time, you will have only some parts of this model. And I’m saying this thinking about the South. In the South, you will find, for example, some countries that do have a lot of money to spend. So, it’s perhaps the case of Brazil, Mexico, the Philippines. And you will find almost the whole model. But sometimes there is less money. So, you will find perhaps not even marketing companies, but influencers or content creators, or what we call TikTokers, bringing these services of digital marketing campaigns. And even also journalists, journalists that are having a very hard time because the business model of the media is in crisis. And many journalists are in the streets without employment, but they know about the information flows. So, perhaps we will find some journalists providing some part of the services of digital marketing strategies. So, in the South, I would say, you will find a wide range of digital marketing services for campaigns, for disinformation campaigns. So, again, when we are looking at a specific country, it’s good to understand that you will find different actors. 
A lot of actors playing some part of the roles in the ecosystem of disinformation. I would say that to begin; then we will dig a little bit more into perhaps the solutions, or the way forward.

Heloisa Massaro:
Thank you, Eliana. I will now pass the word to Anna Kompanek, who is here in person with us. Anna is Director for Global Programs at the Center for International Private Enterprise, where she manages a portfolio of programs spanning emerging and frontier markets around the world in CIPE’s core themes of business advocacy, strengthening entrepreneurship ecosystems, institutional trust, economic inclusion, and organizational resilience. Kompanek holds a BA in International Studies from Indiana University of Pennsylvania, a master’s degree in German and European Studies from Georgetown University School of Foreign Service, and an MBA from George Mason University. She is a Certified Compliance and Ethics Professional International and a graduate of the U.S. Chamber of Commerce Institute for Organization Management. And Anna will dig into the role of the private sector and how we can work towards developing good practices and recommendations.

Anna Kompanek:
Thank you so much, Heloisa. And I feel like I should start with an explanation. You know, when we say private sector in fora like the IGF, typically what comes to mind is big tech companies. Here we talked about marketing companies. I want to expand that definition a little bit further and focus on a different segment of the private sector, which perhaps I could call just the local business community, for more clarity, because that is the segment that my organization, the Center for International Private Enterprise, or CIPE, or SIPE if you speak Spanish, or I guess Portuguese works as well, that is the market segment, if you will, that we work with. And I have to say, in conversations about combating disinformation and building healthier information spaces, the role of the local private sector as a stakeholder and potentially an ally is not often talked about. So I appreciate this opportunity. Because ultimately, so you mentioned the marketing companies, ultimately there is also a question: what about the companies that pay to have their advertisements placed in different online spaces through the marketing agencies? So what we’re seeing in countries around the world is, in many cases, maybe just out of a basic lack of knowledge, you know, companies don’t necessarily think about how their marketing spend may be contributing to disinformation, because their, you know, basic metric when they buy ads is eyeballs, right? How many eyeballs are seeing this ad? Does it help us generate more sales and so on? But they don’t always consider other risk factors, such as, you know, some of the advertisements, for instance, may appear on websites that are well known to be associated with disinformation or disreputable in some other ways. 
So there’s just a basic question of sort of sensitizing companies who pay for advertising to think beyond: you know, what are some other ramifications of where that money goes and where their ads appear? And of course, I want to make it clear, you know, the local business community in any country is not a monolith. So companies themselves also may be contributing to disinformation. In many cases, it’s commercially motivated disinformation, when perhaps we publish or pay for coverage that is not factually accurate, let’s just say, of our competitor. So in some ways, there might be contributors to the disinformation problem. And of course, if our advertising spend supports, directly or indirectly, disinformation, that’s a problem. But we are also victims of disinformation, be it through just direct impact on their brand, and also more broadly, through the declining quality of the overall information space. So if the overall quality of journalism in a country suffers, ultimately, those companies may not be able to get, you know, economic information, policy information that’s trustworthy, and that is crucial to their operation. So when we think about who, sort of, the key ally would be, that doesn’t necessarily mean that every company in a given country is interested in doing something about combating disinformation. They may not be; they may be actively involved in spreading it. In many cases, you have state-owned companies or otherwise politically controlled companies that may also be not-so-great actors. But I would say there’s a growing segment of companies, and they are a worthy ally, who recognize the dangers and who also, frankly, see the business case to, you know, improve their own conduct or their own information footprint, if you will, and also to support healthier information spaces, and not just through marketing spend. There are many other ways in which companies can be constructive actors in supporting independent journalism. 
If we have time and the conversation goes that way, I’ll be happy to highlight other examples. For now, let me just mention that one of the resources that may be of interest to the audience here is a report that my organization and the Center for International Media Assistance, or CIMA, worked on together jointly. It’s called Investing in Facts, how the business community can support healthy info spaces, where we did sort of a global scan of different ways in which private companies can be involved in supporting ethical independent journalism and strong independent media spaces. Ethical advertising is one of those ways but there are others and I’ll be happy to get into that if we have time. Thank you.

Heloisa Massaro:
Thank you, Anna. This is really a great point, and it was actually also one of our takeaways from the project: that engaging in countering disinformation, or in an ethical information ecosystem, is also something that is important for the companies and the brands themselves, because it helps also with their public relations. So thank you. Thank you so much. And now we will go to Herman Wasserman. Herman is a professor in the Department of Journalism at Stellenbosch University. He’s joining us online today. He currently holds a professorship in media studies at the University of Cape Town and previously directed the Center for Film and Media Studies. An accomplished alumnus of Stellenbosch University, Wasserman’s academic journey spans esteemed institutions in both South Africa and the United Kingdom. His extensive research in media, democracy and society has earned him international recognition, leading to memberships and leadership roles in prominent academic associations. Herman, thank you for joining us today. Herman will cover for us today the disparities in the comprehension of disinformation between the global north and the global south, and how this can also impact the discussion we are having here today. So please, Herman, the floor is yours.

Herman Wasserman:
Hello everyone. It is a great privilege to be joining you, unfortunately not in person but remotely. I have received two questions. Is that correct? The one on the disparities, and the second also then on the role of advertisement. So I’ll say something very brief on them both to allow for more time for discussion and questions, as this is obviously not the optimal way of making a broad contribution, but it’s maybe just some points to consider. So I think in terms of the first question, considering the disparities between global north and south and how disinformation dynamics manifest in these regions, I think there are two main points to consider in this regard. I think firstly it is that disinformation has existed in the global south for a long time. We have seen recently that it has become a preoccupation in scholarship and policy debates in the global north. We can track that, and we have tracked in our research that scholarly production around disinformation peaked in 2016. No surprise why that is the case, around the elections in the US at that point. And from then on it grew very steeply in terms of scholarly research. But when we actually consider the presence of what we now call disinformation, it is on a continuum with communication strategies, types of communication, propaganda even, that have been present in the global south for a long time. And not only disinformation but also the other related issues, such as the pressure on the information environment, the pressure on free and accurate exchange, the pressure on the public sphere, all of these things that we now associate with what has come to be called the information disorder. These discourses and these trends have been in the global south for a long time. One could even say, I think, that the discourses that kept colonialism in place were often a type of disinformation that served to justify the subjection of the colonized. 
And then in the post-colonial era, if we look at my continent, Africa, it is very clear, for instance, that even in the post-colonial era states have often limited critical voices by owning and controlling the media, controlling the public sphere, engaging in disinformation campaigns. So what we are seeing today, when we are again seeing that governments in Africa and elsewhere use the excuse of fake news to enact laws that criminalize dissent, there’s a continuity that is important to note. I think it’s really important that we see this in the global south in the long historical moment. Also, if we look at foreign influence operations today, I think it is important to recognize that foreign influence operations now often draw on historical loyalties, historical presences in Africa, and that there’s this longer historical view. So that’s, I think, the first point to make: that there’s a continuity, that we shouldn’t see disinformation in the global south as something that is entirely new, um, and that we have to understand it within a longer historical perspective, even if these trends and forms of disinformation are now facilitated and amplified through new technologies. It is a continuation of an older threat. I think the second point maybe to make is that we now see a double threat to the information landscape in the global south, both externally by foreign influence operations and internally by repressive states. And that is a threat, also, um, to the information landscape more broadly, but it is critically also a threat to journalism, and to free journalism in the region. Um, and I will get to that when I make a few points about the information environment and the role of advertisement. 
But I think it is also important to note that citizens and audiences in the global South have agency, and when we think about disinformation in these contexts we should recognize and be very alert to the agency that audiences and citizens have and the ways they are practicing it, because that can also hold a lesson for the global North. One of the points we have made in our research is that we should encourage more attention to disinformation in the global South, not merely because the global South is important or because more attention should be paid to it, but because there are actually lessons to be learned from the global South experience that can be useful for the global North. One of these is the way that citizens, activists, organizations, and civil society movements in the global South are using that agency to fight disinformation through various strategies. One of the interesting strategies we have seen in the research, in this project that I lead and that the Internet Lab has also been involved with, is how the fight against disinformation in the South is linked with other struggles, so that it is not seen in isolation but linked with struggles such as the struggle for Internet access, digital rights, media freedom, education, and so on. If we have time, I can elaborate on that in question time, but there are clear examples of how organizations and activists in the global South see the countering of disinformation as part of broader struggles. Activists in the global South know that to empower citizens to stamp out disinformation, those citizens need access to the Internet, for instance; they need digital rights; they need freedom of expression; they have to have a good basis of media literacy, et cetera. So these struggles are often linked. 
And when we approach disinformation in the global South, it becomes very clear that we cannot fight disinformation in isolation. We have to see it as part of a broader ecology, a broader array of rights and struggles. So if I can move on quickly to the questions of the role of advertisement for the information environment and what implications these disparities might have for addressing and mitigating disinformation, I would like to return to the focus on journalism. If we think that critical, independent journalism is one of the most important tools we have in the fight against disinformation in the global South, we also have to think about the threat of disinformation as linked to the threats to journalism in the global South. One of the major threats, as I have already alluded to, is ongoing state pressure and repression. This is not a new trend; it has been going on for many, many years. But what is particularly pernicious at the moment is that, perhaps ironically, states are using the fight against disinformation to enact fake news laws. We have seen that across the South, but especially in Africa, which I am more familiar with. The fight against disinformation has become a smokescreen for further oppression, and that has become a very pernicious and very important thing to focus on. But when you look at advertising and marketing, again I think there is a double-edged sword, or maybe two sides of the coin. If we look at the role of advertising in relation to journalism, if we take journalism as a key component, a key guarantee or a key weapon in this fight against disinformation, advertising can be part of the problem. We are familiar with those issues: misleading advertising, advertising that might look like journalism but is in fact marketing. 
The very fact that business models can promote a certain type of journalism that is sensationalist, that promotes clickbait, that perhaps focuses only on elite audiences and leaves large parts of highly unequal societies without access to media agendas: all of these aspects of advertising and marketing in relation to journalism, which we are familiar with, can create problems for journalism's ability to fight disinformation. But an aspect we often lose sight of is that advertising is also important for news organizations in the South, especially for small independent media outlets in places where the state owns and controls many media outlets. These small independent media outlets are often under severe economic threat. We saw during the COVID pandemic how many smaller community organizations, community media, and independent media on the continent had to close down or severely scale back their operations. In this regard, advertising can actually be an avenue for smaller community outlets to sustain themselves. That is obviously not the only model; there are donor-based models and philanthropic models and so on that are really important to explore, but advertising is one of those avenues. And what we increasingly hear from these news organizations is that, because advertisements in the online environment are sucked up by big platforms like Google, local news outlets lose an important source of revenue, or get a very small part of it, and that threatens their sustainability. Another aspect to point to is that the precarious economic environment in large parts of the global South also means that companies often cut back on advertising. 
Whenever there is an economic downturn or economic pressure, and that characterizes the global South almost universally, advertising dries up. That also becomes a problem for news outlets, and it often opens the door to a sort of capture of these news organizations by people with money and influence, rendering them more vulnerable to disinformation. So when we look at advertising in the global South and its relation to disinformation and journalism, we have to understand that it is a complex issue, that there are different aspects to consider, and that one has to take context into account. Throughout the global South, when we study disinformation, it is clear that context is incredibly important and that we cannot just import models of understanding and analysis from the global North to understand the problem in the global South. We have to look at this problem within context and within the local specificities. So I will leave it there. Those are my initial comments, and I would be happy to hear any questions or feedback. Thank you.

Heloisa Massaro:
Thank you so much, Herman, for the great overview. I will now pass to Renata Mielli. Renata is a journalist with a bachelor's degree in social communication from Faculdade Casper Libero. She is currently pursuing her doctorate in the communication science program at the School of Communication and Arts at the University of Sao Paulo, and she holds the distinction of being the first female coordinator of the Brazilian Internet Steering Committee, the CGI, a multi-stakeholder entity responsible, among other duties, for establishing strategic guidelines related to the use and development of the Internet in the country. So, Renata, please.

Renata Mielli:
Thank you. Thank you, Heloisa, and thank you to the Internet Lab for the invitation to this session. I think this theme is very important; in Brazil we have been discussing it for a very, very long time. Well, I have some notes here and a few reflections about this problem. The massive dissemination of false and misleading news and information has currently drawn attention to the harmful effects it has produced in society. The challenge of developing actions that, on the one hand, protect fundamental rights such as freedom of expression, privacy, and access to information and, on the other hand, preserve respect for cultural diversity is paramount. Disinformation is a phenomenon as old as the history of the press. Historically, the content value chain has depended, to a greater or lesser extent, on the sale of advertisements. Advertising has played a role not only in promoting journalism but also in promoting access to information. Concerns related to the independence of news production and the use of advertising funds to manipulate public opinion are also not new. The Internet, however, has allowed the dissemination of false and misleading news and information to reach unimaginable levels, and its negative effects on society have become even more severe. Understanding this phenomenon necessarily involves understanding the emergence of a network of motivations for the creation, dissemination, and consumption of false and misleading content that amplifies information disorder and is related to the business models of digital platforms. In this sense, the term disinformation industry is appropriate to describe the continuous increase in complexity and size of the production chains and networks of actors that emerge, stimulated by high financial investments mostly funded by advertising. Digital platforms have captured an important part of the advertising market, amplifying content through the use of personal and sensitive data. 
An important part of this content is misleading, false, harmful, and illegal. Research has suggested that content with demonstrably false information circulates more than verified content, feeding digital platforms' business models. Content moderation regulation faces issues such as the profound lack of transparency in the development of advertisements and the algorithms that showcase them. Beyond that, intermediary liability regimes based on the principle of non-liability of the networks are being questioned, raising issues yet to be settled. As sponsored and boosted content has greater capacity to reach internet users across different platforms, it is fundamental to investigate the damage it causes to the production of information and news and the role advertisement plays in these processes, especially in a scenario of massive collection and use of personal data to profile users and target propaganda. Regulatory initiatives need to take into account both the specific aspects of information flow, the advertising market and its actors, and how the business models of large platforms favor disinformation. In order to define strict policies that enable a healthy informational environment, some directives may be considered. Regulating the role of influencers in programmatic media: this is a very big problem we have; influencers now have larger audiences than newspapers and journalists. Establishing strict transparency measures for advertisement, also covering sponsored and boosted content on social media, such as advertising libraries served by digital platforms and disclosure of the reach and profiles involved in an ad or boosted content. And corporate responsibility of the intermediaries and links in the advertising chain in relation to the integrity of the public debate, as suggested in the booklet formulated by the Internet Lab, called Public or Fake or Ad or Fake. Other initiatives, we hope, may be proposed in the discussions carried out in this session. 
Finally, the Brazilian Internet Steering Committee carried out a broad consultation process on platform regulation, which, among other issues, involved questions about concentration in the online advertising market and the risks of the platform business model, such as disinformation and infodemics. The consultation received more than 20,000 contributions from individuals and organizations from different sectors of society. The analysis of its results is still ongoing, and we hope it can be of great value for the formulation of an innovative and locally designed policy. Those are my first reflections. Thank you for the opportunity.

Heloisa Massaro:
Thank you, Renata. Now we are going to open the mic not only for questions but also for considerations, comments, and thoughts. Those who want to make an intervention and are here, please use the mic over there. For those who are online, you can either send it via the Q&A or raise your hand, and we will monitor that to allow you to intervene.

Audience:
So, any? I guess I'll ask a question if people don't want to ask questions. I'm Dan Arnato, from the National Democratic Institute. I'm curious that you didn't talk much about political advertising, which is a lot of what we engage in monitoring at NDI, along with other election observation groups. So I'm curious about the role of political advertising in particular and how it could be better managed using different kinds of interventions, whether legal mechanisms to control it or monitoring systems. I think Cambridge Analytica, as you mentioned, really demonstrated some of the challenges we have in terms of data that can be used for targeting. It's a problematic component, because that kind of information is useful for research and other purposes, but it is unfortunately a problematic part of our modern political systems that these systems can become weaponized. So I'm curious about your perspective on that piece.

Heloisa Massaro:
Thank you, Daniel. We have one more.

Audience:
My name is Juliano. There is a difficulty in separating the acting of digital influencers in their own work and as political marketeers. Part of advertisement funds is dedicated to political campaigns. So I'd like to hear from the panelists a little about how we could develop a kind of regulation that could look into how advertising is fomenting disinformation in political campaigns, as it is so difficult to separate political content from all the other kinds of content circulating on the Internet. Thank you.

Hi, my name is Juliana. I'm a technical advisor from the Brazilian Internet Steering Committee, and in Brazil we have a self-regulating council for government advertisement. So I would like to know how the measures to mitigate the risks mentioned can articulate with the self-regulatory frame and with regulation from the state. For instance, could we demand more transparency from these influencers through the self-regulatory council, or do we have too many problems, like regulatory capture in these councils and a lot of other difficulties? Can we adapt these spaces and advance, or should we rely on regular regulation? I would like to hear how these things interact. Thank you.

Heloisa Massaro:
Thank you, Juliana. Anyone else, or do we get back to the panel? Okay, so we are getting back to the panel. Before passing the word to my colleagues, I would like to add something to Juliana's question, which was really great. During the project we were developing, we actually mapped some of these self-regulatory bodies for advertisement and how they interact with these issues. It is interesting that there are normally a couple of safeguards in place, self-regulatory norms, that target what would be disinformation in the narrative at the level of misleading consumers. But when you go beyond that, when the problem with the narrative is less about the product and more about how it can uphold other types of disinformation, or even hate speech, depending on how you build the narrative, and when you are speaking about how advertisement may finance or be a source of funding for disinformation outlets, then there is a limit to what we have until today from the self-regulatory bodies. I think this is one of the challenges: how do we think about the way forward? Is this something we should cover with state regulatory approaches? In Brazil, and Renata can speak more about that than me, we are discussing this in the fake news bill, the platform regulation bill, which has something on advertisement, but there is a long way to go, and there is this space where there are not so many parameters and safeguards. I will stop here to let my colleagues speak, and I will actually go backwards now: I will pass first to Renata, then Herman, Anna, and Eliana.

Renata Mielli:
Well, three very good questions. I cannot answer all of them, but just a reflection, because the challenge of how we conceptualize political advertisement is very difficult. There is a very thin line with freedom of expression, the free flow of ideas, of political ideas, so conceptualizing this is very difficult, and it is a challenge to establish good practices to avoid disinformation here. But we are all dealing with that. In Brazil we have passed through two elections in which the flow of disinformation content in the political debate was enormous, but I think it is very difficult to categorize political advertisement. What are we talking about? When a political party produces some content, is this advertisement or not? Just a question for our reflection: how do we manage this? So this is a very big problem, and it is not easy to face. Another comment is about what Juliana brings to us. I dealt with that before working with the Internet, when I was in civil society discussing the democratization of communication in Brazil, and our private sector in advertising always said that there is a new right that we have to add to human rights: the free speech of advertisement, as they call it. They use this expression to avoid any kind of regulation. I myself do not believe in self-regulation in this sector; I think we need another kind of approach. Of course this kind of structure has its importance, but we have to have a public space to discuss advertisement in a more serious way. And we did not talk about this, but political advertising is a problem, and we also have problems with health, with advertisement about medicines. We saw this in the pandemic, and it is a very big problem because it affects people's lives. So those are only a few comments.

Heloisa Massaro:
Thank you so much, Renata. And now back to Herman, who is online.

Herman Wasserman:
I won't say much more than the previous speakers have said, because a lot of that resonates in the South African context. We do have regulations for political advertising, but they come from a previous era, pre-social media really. So the advertising of political parties prior to elections on, say, broadcast channels and newspapers is fairly well regulated, but what happens on social media is less easily regulated. Also, to echo the previous speakers on what we define as advertisements: increasingly, political parties are using all sorts of other forms of guerrilla marketing and campaigns that are not as easily definable. In this regard I would say that regulation is important, but maybe even more important is the coalition of journalists and civil society organizations to interrogate what political parties are saying, to fact-check their claims, and to make audiences aware of the source of claims, campaigns, and marketing strategies. So formal regulation in itself is not enough; it is important that it also forms part of a broader awareness raising and a broader systemic orientation towards political communication from journalists, civil society organizations, et cetera.

Heloisa Massaro:
Thank you, Herman. And now back to Anna.

Anna Kompanek:
So I won't necessarily comment on political advertising, since that is not specifically the issue we are looking at, but I wanted to re-emphasize the point Herman made in his earlier remarks that independent journalism is the key weapon in combating disinformation. Speaking from the perspective of the private sector, as I said, there are many ways that the local private sector, local businesses, can help invest in a healthy infospace and in independent journalism beyond ethical advertising: through impact investment, blended finance, corporate philanthropy, or thinking about it as part of their CSR. With that corporate mindset of thinking about their impact in the information space, we do see the local private sector also involved in having a voice as policies that govern the information space are made. In Armenia, for instance, we work with a local business organization that has provided input into the national strategy against disinformation. So there is a broader principle: whatever laws are being passed, the biggest danger is the government passing a law without any consultation or input from civil society and also from the local private sector. There is also value, as we see it, in bringing local business organizations together as part of the broader coalitions that were mentioned before, to talk about the issue of securing the information space, mapping it out, and thinking about incentives for private sector investment in independent media. In our work we see that, for instance, in the Philippines, where we helped bring together the Philippine Association of National Advertising and the Makati Business Club, one of the major business organizations in the country, to talk about this particular issue, which may not necessarily be a natural topic for entities like that. So let's be creative about which stakeholders are involved and what collaborations are possible.

Heloisa Massaro:
That's really interesting, thank you, Anna. And now back to Eliana.

Eliana Quiroz:
Thanks. Building on Anna's response: yes, I guess it is key to understand which actors are playing. Taking the question about political advertising, it would be like following the money, and also understanding which entities are part of this ecosystem, so as to bring some transparency on which actors are participating. When I think about bringing transparency, for example, in a specific election we should know which marketing companies are taking part and which are contracted by which political party. And not only marketing companies but also, for example, influencers: there are many influencers contracted by political campaigns, so it is good to know who is bringing paid information, not only advertising placed directly on the platforms. We should also understand which data providers are there, which data analysis companies are playing some kind of role, which audiovisual media production companies, communication, digital, and public relations companies, and also which fact-checking and public opinion companies are providing services during an election. I say during an election because it is delimited; it is kind of a special moment, not the broader political life at any time, and it is possible for the authority, the electoral authority, to bring in some regulations. The second idea is that it is interesting to understand that some companies are not really aware of human rights frameworks, like the business and human rights framework, and it is very good to include them in the conversation and bring some knowledge about these frameworks, to let them know what is allowed and what is not allowed. And then, of course, I really do think regulation is part of the solution, but we know it is very complicated, because we also have to take care of freedom of expression. Yet regulation, not only of the platforms but also of some actions of other companies, is part of the solution. I will stop there.

Heloisa Massaro:
Thank you, Eliana. We have reached our time limit, so I would like to thank all our panelists today. I think it was a really interesting discussion, and there are some takeaways, or at least some points, we can map from it: not only the difficulty of defining political advertisement, but how there is also a blurred line between what is commercial advertisement and what is political advertisement. We have seen this in Brazil in the last election, when we had brands engaging politically, and the question of where the line of free speech can or cannot be drawn. And also the importance not only of advancing regulation but of engaging different actors, because despite the fact that there may be actors with bad intentions within the ecosystem, we actually have a large number of actors there to be engaged and included in the conversation on business and human rights. So I would like to thank everyone for being here today, and thank you to everyone who stayed until almost seven o'clock with us. I hope you have a good rest of the IGF and a good rest of Wednesday.

Anna Kompanek: speech speed 145 words per minute; speech length 1143 words; speech time 473 secs
Audience: speech speed 114 words per minute; speech length 398 words; speech time 209 secs
Eliana Quiroz: speech speed 124 words per minute; speech length 1066 words; speech time 514 secs
Heloisa Massaro: speech speed 130 words per minute; speech length 2104 words; speech time 975 secs
Herman Wasserman: speech speed 166 words per minute; speech length 2249 words; speech time 812 secs
Renata Mielli: speech speed 109 words per minute; speech length 1152 words; speech time 635 secs

Exploring Blockchain’s Potential for Responsible Digital ID | IGF 2023


Full session report

Judith

Vicky expresses gratitude and greets the audience, creating a positive and welcoming atmosphere. The speaker’s tone and appreciation set the stage for an engaging interaction.

Joey

The project had several positive outcomes for Ugandan students. Firstly, it provided exposure to technology and hands-on experience. Students had the opportunity to interact with students from Japan, which not only helped them develop their cross-cultural skills but also sparked an interest in technology. This exposure to different cultures and technology is important for their educational development and future career prospects.

Furthermore, the project had a significant impact on language and social learning. Students were able to engage in interactive language practices and received artistic feedback on their language skills. They also had the chance to express themselves in both Swahili and English. This not only improved their language proficiency but also facilitated their social and emotional learning.

However, the project faced challenges in integrating technology due to limited resources and budget constraints. The local setup, Gudu Samaritan, struggled to invest in technology because of these constraints. This highlights the need for adequate funding and resources to ensure the successful integration of technology in education.

Another obstacle was the unstable internet connection, which hindered online participation. This limited students' ability to fully engage in online activities and access educational resources. A stable and reliable internet connection is crucial for effective technology integration in schools.

Regarding curriculum integration, there is a need to engage with the Ministry of Education. Engaging with the Ministry would ensure better resource allocation and adjustment of teaching methods to effectively integrate the project into the curriculum. This collaboration is necessary for the long-term sustainability and impact of the project.

Funding was deemed crucial for projects that integrate technology into schools. The government should provide infrastructure, such as a stable internet connection, for successful implementation. Additionally, schools like Gudu Samaritan require resources like an intelligence system, robots, and computer equipment to fully leverage the benefits of technology in education.

Another important aspect is promoting literacy in online platforms. All students and teachers should be literate in the use of online platforms. This would ensure equal access to information and opportunities. Educators should be given the opportunity to participate in online workshops and training to gain confidence in incorporating technology in their everyday teaching.

In conclusion, the project had various positive impacts on Ugandan students, including exposure to technology, cross-cultural interaction, and development of language and social skills. However, challenges such as limited resources, budget constraints, unstable internet connection, and the need for curriculum integration must be addressed for the successful integration of technology in education. Adequate funding, collaboration with the Ministry of Education, and promoting literacy in online platforms are essential for the continuation and growth of such projects.

Ruyuma Yasutake

The HARU project has had a positive impact on English conversation classes, enhancing the overall learning experience. HARU, an advanced AI-based interactive robot, helps to create smoother and more engaging conversations by responding to moments of silence and using interesting facial expressions. This not only makes the conversations more enjoyable but also creates a dynamic learning environment. The use of HARU has also facilitated cross-cultural interaction by connecting students from different countries. This provides a unique opportunity for meaningful conversations and a better understanding of different cultures. While there have been some challenges, such as system troubles and interruptions in interactions, the overall experience has been positive. HARU also offers the opportunity for students to interact and work with professional international researchers, which enhances their learning. Furthermore, HARU has the potential to connect students from different countries, promoting global collaboration in education. Additionally, HARU can be used as a partner for practicing conversations, allowing students to improve their conversation skills in a supportive environment. The use of AI’s evaluation system in education also holds promise for fairer assessments, reducing biases and promoting fairness. In conclusion, HARU has numerous benefits and, with further advancements and improvements, has the potential to revolutionize education and communication.

Randy Gomez

The Honda Research Institute, led by Randy Gomez and his team, responded positively to UNICEF’s call to implement and test policy guidance. They dedicated a significant portion of their resources to developing technology for children, with a focus on creating a system that enables cross-cultural interactions among groups of children from different countries. This system involves a robot facilitator that connects to the cloud, allowing children to interact regardless of their geographical locations.

The team conducted experiments using interactive games facilitated by the robot to evaluate the effectiveness of their technology in promoting cross-cultural communication. The results were overwhelmingly positive, demonstrating the efficacy of the technology in enabling these interactions.

In addition to developing the technology, the team recognized the importance of understanding its societal, cultural, and economic impact on children from diverse backgrounds. They deployed the robots in hospitals, schools, and homes to gather insights into implementing the technology in different settings. They also collaborated with Vicky from the JRC to align their application with IEEE standards and ensure industry compliance.

Overall, the Honda Research Institute’s work contributes to the United Nations’ Sustainable Development Goals, specifically in reducing inequalities, ensuring quality education, and promoting industry, innovation, and infrastructure. The technology they developed for cross-cultural interactions among children fosters understanding and connectivity. It has the potential to create a more inclusive and globally connected society, while also shedding light on the societal, cultural, and economic effects of robotic technology on children’s development.

Steven Boslow

Artificial Intelligence (AI) technology is increasingly present in the lives of children, being used in areas such as gaming, education, and social apps. These AI systems have the power to influence significant decisions, including those related to health benefits, loan approvals, and welfare subsidies. However, it is concerning that most national AI strategies in 2019 did not adequately consider children as stakeholders. This lack of recognition of children’s rights in AI policies highlights the need for improvements.

Moreover, the existing ethical guidelines for AI do not sufficiently address the unique needs of children. These guidelines are not specifically tailored to tackle the challenges and risks that children may face with AI technologies. This oversight is worrisome, considering the substantial impact that AI can have on children’s lives.

On a positive note, UNICEF, in collaboration with the Finnish Government, took an initiative in 2019 to address this issue by introducing policy guidance on AI and children’s rights. This guidance aims to provide a framework for responsible and ethical use of AI concerning children. Several organizations have since implemented these guidelines and shared their experiences and lessons learned. The implementation of UNICEF’s guidelines is a crucial process in safeguarding the rights and well-being of children in the context of AI.

Recognizing the fact that children make up approximately one-third of all online users and an even higher proportion in developing countries, it becomes evident why prioritizing children’s rights is essential. While AI presents great opportunities, it also poses significant risks for children. Therefore, it is important to establish robust regulations that effectively protect their rights while enabling the positive utilization of AI technology.

In conclusion, the increasing presence of AI in children’s lives emphasizes the need for them to be recognized as key stakeholders in national AI strategies and ethical guidelines. UNICEF’s efforts to develop and implement guidelines specifically addressing AI and children’s rights are commendable. They highlight the importance of prioritizing children’s needs and ensuring their protection in the development of AI regulations. To ensure a safe and beneficial AI environment for children, continuous improvement of policies, guidelines, and regulations that cater to their unique requirements is essential.

Moderator

According to the analysis, children were not adequately recognized in national AI strategies or ethical guidelines for responsible AI. This lack of recognition raises concerns about the potential negative implications AI could have on children.

One of the key findings is that AI is increasingly being used in education and gaming, indicating it has become an integral part of children’s lives. Given the significant number of children who are active online users, particularly in developing countries, the impact of AI on their lives cannot be ignored.

Furthermore, the analysis highlights that adopting responsible AI or technology can be challenging. Applying principles for responsible AI can cause tensions to arise, and the context in which these principles are applied is crucial. Developing effective regulations and policies concerning AI requires careful consideration of the specific needs and vulnerabilities of children.

The analysis also emphasizes the importance of prioritizing the role of AI in children’s lives when it comes to regulation and policy-making. It highlights the potential risks AI poses, such as providing poor mental health advice or infringing on children’s privacy. These risks underline the urgent need to establish robust guidelines and safeguards to protect children’s well-being and rights in the context of AI.

Additionally, the Honda Research Institute’s development of robotic technologies for children in response to UNICEF’s call for policy guidance implementation and testing is noteworthy. This initiative demonstrates the commitment to address the specific needs and challenges faced by children in an increasingly AI-driven world.

Collaboration between urban students from Tokyo and rural students from Uganda was a significant aspect of the analysis. This collaboration aimed to enhance intercultural understanding and explore the variations in children’s rights comprehension across different situations. This emphasizes the importance of context in comprehending and addressing children’s rights issues.

Moreover, the role of technology in education was found to have a positive impact on students’ understanding and interest. The projects analyzed contributed to the development of social and emotional skills, further reinforcing the potential benefits of integrating technology in educational settings.

However, the analysis also identified several challenges. Limited resources and budget constraints were major obstacles, particularly in the context of a local setup called Gudu Samaritan in Uganda. These constraints made it difficult to invest in technology and maintain stable internet connections, hindering the implementation of projects.

To overcome these challenges, the analysis suggests engaging the Minister of Education in Uganda to integrate the project into the curriculum and secure additional resources. This approach would not only address budget constraints but also provide the necessary time and support to adapt teaching methods effectively.

In conclusion, the analysis highlights the need for greater recognition of children in AI strategies and ethical guidelines. It underscores the importance of considering the specific needs and vulnerabilities of children when developing regulations and policies related to AI. The potential risks associated with AI, such as issues related to mental health and privacy, call for the implementation of comprehensive safeguards. The analysis also sheds light on the positive impact of technology in education, particularly in enhancing students’ understanding, interest, and social and emotional skills. However, challenges such as limited resources and budget constraints must be addressed through collaborative efforts involving government bodies and educational institutions. Overall, a comprehensive and child-centric approach to AI and technology adoption is essential to ensure the well-being and rights of children in the digital age.

Session transcript

Moderator:
So, welcome to our session on the implementation of the UNICEF policy guidance for AI and children’s rights. This is a session where we are going to show how our extended team tried to implement some of the guidelines that UNICEF published a couple of years ago. I would like to welcome, first of all, our online moderator, Daniela DiPaola, who is a PhD candidate at the MIT Media Lab. Hi, Daniela. She is going to help with the online and remote speakers. And here I would also like to invite Steven Boslow and Randy Gomez, my co-organizers, to come on the stage so we can set the scene and start the meeting. Thank you. So first, let me introduce Steven Boslow. Steven is a digital policy innovation and ad tech specialist with a focus on emerging technology. Currently, he is a digital foresight and policy specialist for UNICEF based in Florence, Italy. Steven was the person behind the policy guidance on AI and children’s rights at UNICEF. And Steven, you can probably explain more about this initiative. Thank you.

Steven Boslow :
Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a digital policy specialist, as Vicky said, with UNICEF. And I’ve spent my time at UNICEF looking mostly at the intersection of emerging technologies, how children use them and are impacted by them, and the policy. So we’ve done a lot of work around AI and children. Our main project started in 2019 in partnership with the government of Finland and funded by them. And they’ve been a great partner over the years. So at the time, 2019, AI was a very hot topic, as it is now. And we wanted to understand if children were being recognized in national AI strategies and in ethical guidelines for responsible AI. And so we did some analysis and found that in most national AI strategies at the time, children really weren’t mentioned much as a stakeholder group. And when they were mentioned, it was either as needing protection, which they do, but there are other needs, or in terms of how children need to be trained up as the future workforce. So not really thinking about all the unique needs of every child, their characteristics, their developmental journey, and their rights. We also looked at ethical AI guidelines. In 2019, there were more than 160 guidelines. We didn’t look at all of them, but generally found not sufficient attention being paid to children. So why do we need to look at children? Well, of course, at UNICEF, our guiding roadmap is the Convention on the Rights of the Child. Children have rights; they have all the human rights plus additional rights, as you know. One third of all online users are children, and in most developing countries, that number is actually higher. And thirdly, AI is already very much in the lives of children. We see this in their social apps, in their gaming, and increasingly in their education.
And they’re impacted directly as they interface with AI, or indirectly as algorithmic systems determine health benefits for their parents, or loan approvals, or welfare subsidies. And now with generative AI, which is the hot topic of the day, AI that used to be in the background has come into the foreground, so children are interacting with it directly. So very briefly, after this initial analysis, we saw the need to develop some sort of guidance to governments and to companies on how to think about the child user as they develop AI policies and AI systems. So we followed a consultative process. We spoke to experts around the world; some of the folks are here. And we engaged children, which was a really rich and necessary step. And we came up with a draft policy guidance. And we recognized that it’s fairly easy to arrive at principles for responsible AI or responsible technology. It’s much harder to apply them. They come into tension with each other, and the context in which they’re applied matters. So we released a draft and said: use this document, tell us what works and what doesn’t, and give us feedback, and we will include that in the next version. And so we had people in the public space apply it, like YOTI, the age assurance company. And we also worked closely with eight organizations, two of which are here today, Honda Research Institute and the JRC, as well as IMISI3D, and Judith is on her way. And we basically said, apply the guidance, and let’s work on it together in terms of your lessons learned and what works and what doesn’t. So that’s what we’ll hear about today. It was a real pleasure to work with the JRC and Honda Research Institute and to learn the lessons. And so, yeah, just in closing, AI is still very much a hot topic. It’s an incredibly important issue to get right, or technology to get right, and it is just increasingly in the lives of children, like I said, with generative AI.
There are incredible opportunities for personalized learning, for example, and for engagement with chatbots or virtual assistants. But there are also risks. That virtual assistant that helps you with your homework could also give you poor mental health advice. Or you could tell it something that you’re not meant to, and there’s an infringement on your privacy and your data. So as the different governments and regional blocs now try to regulate AI, and the UN tries to coordinate, we need to prioritize children. We need to get this right. There’s a window of opportunity, and we really need to learn from what’s happening on the ground and in the field. So yeah, it’s a real pleasure to have these experiences shared here as bottom-up inputs into this important process. Thank you.

Moderator:
Thank you so much, Steven. Indeed. At that point, we already had some communication with UNICEF through the JRC of the European Commission, and we had an established collaboration with the Honda Research Institute in Japan, evaluating the system from a technical point of view, trying to understand the impact of robots on children’s cognitive processes, for example, or social interactions, et cetera. And there is an established field of child-robot interaction within the wider community of human-robot interaction. That was when we discussed with Randy applying for this case study with UNICEF. And I think Randy can now give us some of the context from a technical point of view, what this meant for the Honda Research Institute and his team. Randy?

Randy Gomez:
Yeah. So, as Steven mentioned, there was this policy guidance, and we were invited by UNICEF to do some pilot studies and to implement and test this policy guidance. That’s why we, at Honda Research Institute, developed technologies in order to do the pilot studies. Our company is very much interested in embodied mediation, where we have robotic technologies and AI embedded in society. And as I mentioned earlier, as a response to UNICEF’s call to implement the policy guidance and to test it, we allocated a significant proportion of our research resources to developing technologies for children. In particular, we are developing an embodied mediator for cross-cultural understanding: a robotic system that facilitates cross-cultural interaction. We developed this kind of technology where the system connects to the cloud and a robot facilitates the interaction between two different groups of children from different countries. Before we did the actual implementation and study, through the UNICEF policy guidance, we tried to look into how we could implement this, and into some form of interaction design between children and the robot. So we did deployments of robots in hospitals, schools, and homes. We also looked into the impact of robotic applications from social, cultural, and economic perspectives with children from different countries and backgrounds, and into the impact of robotic technology on children’s development. So we tried some experiments with a robot facilitating interaction between children in some form of game application. Finally, we also looked into how we could put our system and our pilot studies in the context of some form of standards. That’s why, together with the JRC, with Vicky, we looked into applying our application with the IEEE standards.
And with this, we had a lot of partners, we built a lot of collaborations, which are here actually and we are very happy to work with them. Thank you.

Moderator:
Thank you so much, both of you. So this was to set the scene for the rest of the session today. As Randy and Steven mentioned, this was quite a journey for all of us, and around this project there are a lot of people, a great team here, but also 500 children from 10 different countries, where on purpose we chose to have a larger cultural variability. So we have some initial results, and for the next part of the session, we have invited some people that actually participated in these studies. So thank you very much, both of you. And I would like to invite first Ruma. Ruma is one of the students that … Thank you. Ruma, you can come over. Ruma is a student at a high school here in Tokyo, and you can take a seat if you want here. Yeah, that’s fine. And he’s here with his teacher and our collaborator Tomoko Imai. And we have online also Joey. Joey is a teacher at the school in Uganda where we tried to implement participatory action research, which means that we brought the teachers into the research team. So for us, educators are not only part of the end-user studies, but also part of the research. We interact with them all the time in order to set research questions that come directly from the field. So we are going to start. You can sit here. Do you want? Or do you want to stand? Whatever you want. Sure. Sure. So we have three questions for you. First, we would like you to tell us about your experience in this process, participating in our studies.

Ruyuma Yasutake:
We have online English conversation classes once per week at school, but we often have some problems continuing the conversation. Through our participation in the HARU project, we had a chance to talk with children from Australia with the help of HARU, and this made things somehow different. For example, sometimes there was a moment of silence, but HARU could sense these moments and made the conversation smoother. Also, during the conversation, HARU would make interesting facial expressions and made the conversation fun for us.

Moderator:
During the project, we had a chance to design robot behaviors. And we interacted with engineers, which was really nice. Yeah. And during the project, probably you faced some challenges. I mean, there were some moments where you thought that, oh, this project is very difficult to get done. Do you have anything to tell us about this?

Ruyuma Yasutake:
The platform is still not stable, and sometimes there was system trouble. For example, once the robot overheated and could not cool down, so HARU stopped the interaction and started again. But overall, the experience was positive because I had a great time talking with the professional researchers who were trying to fix the problem. Being able to work with international researchers was a very valuable experience for me.

Moderator:
Thank you, Ruma. And do you want to tell us how you would imagine the future of education? I mean, through your eyes, you’re now in education. So if, in the near future, you have the possibility to interact more with robots or artificial intelligence within formal education, what would this look like for you?

Ruyuma Yasutake:
I hope that HARU can help connect many students in different countries, and that the robot can be a partner for practicing conversation by taking different roles, like teacher, friend, and so on. And probably the use of an AI evaluation system can make assessment fairer.

Randy Gomez:
OK. So thank you very much, Rima. This was an intervention from one of our students. But next time, probably, we can have more of them. And now I would like you can probably see. Yeah. Thank you. Thank you. You can go. You can take a seat there. I’ll take a seat here. Yeah. The question will be later. And now, probably, we have an online speaker, Joy. Can you hear us, Joy?

Joey:
Yes, I can hear you.

Moderator:
Perfect. So Joy is one of our main collaborators. She’s an educator in a rural area in Uganda, in Boduda. Her school is quite remote, I would say. Through another collaborator of ours, we had an initial interaction with her, we explained our project to her, and we asked if we could have some sessions. Our main goal in including a school from such a different economic but also cultural background was to see whether, when we talk about children’s rights, this means exactly the same in all situations. Does the economic or the cultural context play any role here? So what we did was to bring together the students from Tokyo, an urban area, and the students from Uganda to explore the concept of fairness. We ran studies on storytelling, and we asked children to talk about fairness in different scenarios: everyday scenarios, technology, and robotic scenarios. And now, Joy, would you like to talk a little bit about your experience participating in our studies?

Joey:
Yeah, I’m excited. And thank you very much for inviting me to the conference. Thank you very much. I’m Joy, and I’m an educator from a Ugandan school called Bunamari Budusa Maritan, which is found, of course, in Uganda, in a rural setting. It has a total of about 200 students in the age bracket of 5 to 18 years old. Most of the students live close to the school, and their parents are generally local citizens. The greatest benefit from being involved in the project has been the exposure for my students. The project has enabled our students to participate and have hands-on experience that enhances their understanding of and interest in technology and other cultures. It was the first time for them to talk to children in Japan and other countries, and that really was a great experience for them. Additionally, a great bonus was language learning, whereby the students were able to engage in interactive practice and receive feedback on their language skills. You could find that they learned how to express themselves in Swahili and English. We are also very thankful that the sessions were well-planned and really captured our students’ attention, which increased engagement during the activities we were handling. In my opinion, the project really enabled social and emotional learning: the development of social skills, emotional intelligence, and compassion for their peers in Japan. They really enjoyed it, and they learned about Japanese culture and the school overall.

Moderator:
Thank you so much, Joy. And if you want to tell us a little bit about possible challenges that you faced while you were participating in our studies. And we didn’t have, of course, we didn’t have the opportunity to have a robot at the school there. So this is something that was not, I mean, we are in very initial phases where we do ethnography. So probably this will be in the future. But already we had some other interactions and discussions with Joy. So would you like to tell us a little bit the challenges that you faced, even with the technology, the simple technology that we used during our project?

Joey:
Thank you, Vicky. In my opinion, the major obstacle was the limited resources we have at the local level, both in Uganda generally and at the school, which is in a local setup. Gudu Samaritan is a local setup with budget constraints, making it difficult to invest in technology. We also found that the internet connection was not at all stable, and that made participating in the online sessions very hard; it was difficult to keep up with the timing. Another issue we had was to do with curriculum integration, whereby we feel there is a need to engage the Minister of Education back in Uganda to integrate the project, so that there are additional resources, time, and adjustments to teaching methods.

Moderator:
Thank you, Joy. And what is your vision for the future? What would you like to have for the future in the context of this project?

Joey:
Thank you. The most important aspect for us is the funding of such projects. First, the government should provide the infrastructure for a stable internet connection for all. This is a basic need for the integration of technology in the school. In a school like Gudu Samaritan, there is no power and there is no internet connection. We were only using, like, one phone and maybe one laptop, which was very hard. So if there is funding, it will help to ease the connection of the internet for the children. We also need resources and the necessary materials, like the intelligent system, the robot, and computer equipment, in the schools. You find that in Japan, you know, the students had computers. This way, our students will have equal access to information, like they do in Japan. For the future, we envision our schools having not only the necessary technology, such as computers and robots for the students, but also trained teachers. We feel AI literacy is important for all students and teachers. We hope that all the educators have the opportunity to participate in those online workshops and trainings, to feel confident about technology in their everyday teaching. Vicky, as you understand, our participation in this project was a great opportunity for our students. And we hope that we will not stop at how we started it, but will continue with this exciting project so it can grow and excel. Thank you very much.

Moderator:
Thank you, Joy. It has been a great pleasure to work with Joy and the school, and thank you very much for your intervention today. Great. So now we can, I don’t know if Judith is around. Judith, you’re here. Great. So I would like to invite Judith. As Steven said beforehand, our project is one of the eight case studies where we tried to implement some of the guidelines from UNICEF. Today we want also to get a taste of another case study. So Judith, I need to read your short bio because it’s super rich. So welcome to this session, first of all. Judith is a technology evangelist and business psychologist with experience working in Africa, Asia, and Europe. In 2016, she set up IMISI3D, a creation lab in Lagos focused on building the African ecosystem for extended reality technologies. She’s a fellow of the World Economic Forum, and she’s affiliated with the Harvard Graduate School of Education. So the floor is yours, Judith.

Judith :
Thank you very much, Vicky. Good afternoon, everybody. What a pleasure.

Joey: speech speed 174 words per minute; speech length 791 words; speech time 273 secs

Judith: speech speed 143 words per minute; speech length 14 words; speech time 6 secs

Moderator: speech speed 146 words per minute; speech length 1361 words; speech time 559 secs

Randy Gomez: speech speed 130 words per minute; speech length 467 words; speech time 215 secs

Ruyuma Yasutake: speech speed 129 words per minute; speech length 223 words; speech time 104 secs

Steven Boslow: speech speed 162 words per minute; speech length 917 words; speech time 340 secs

Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44

Full session report

Chris Jones

Geopolitical discussions should focus on areas of agreement rather than disagreement to foster cooperation and prevent conflicts, an approach that aligns with SDG 16: Peace, Justice and Strong Institutions. Breaking down large tasks into smaller, manageable ones, as advocated by engineer Chris Jones, promotes effective problem-solving and resource allocation, in line with SDG 9: Industry, Innovation and Infrastructure. A positive stance towards international cooperation, addressing challenges by understanding and managing their smaller components, aligns with SDG 17: Partnerships for the Goals.

Large organizations may need to make changes to become more agile and adapt to emerging technologies, a principle also aligned with SDG 9. Governance discussions should consider both shared values and technical requirements, as highlighted by SDG 16, and the process of governance is as important as the final product, as demonstrated by the UK’s online harms legislation.

Multi-stakeholder governance, involving diverse expertise and perspectives, is crucial, echoing SDG 17. The airline industry’s success in implementing common standards serves as an example of a bottom-up approach aligned with SDG 9. Together, these approaches, emphasizing collaboration, agility, inclusive governance, and bottom-up solutions, contribute to sustainable development, peace, and justice.

Sheetal Kumar

The analysis examines the perspectives surrounding future technologies and their impact on marginalized groups, as well as the governance and development of these technologies.

One argument put forward is that future technology developments may not necessarily bring positive impacts, particularly for marginalized groups. New technologies like quantum-related developments, metaverse platforms, nanotech, and human-machine interfaces can be complex and intimidating, making it difficult for already marginalized individuals to access and benefit from them. This highlights the potential for further exacerbation of inequalities if technology is not developed and implemented in an inclusive manner.

On the other hand, there is a strong emphasis on the importance of inclusive technology development and governance. The argument asserts that the development and governance of technology should be more inclusive, particularly in relation to marginalized groups. This approach recognizes the need for diverse perspectives and experiences to be considered to avoid further marginalisation and ensure equitable access to technological advancements.

Furthermore, the analysis suggests that governments and industry stakeholders should prioritise engaging in multistakeholder discussions related to technology developments. Examples such as the IGF Best Practice Forum on Cybersecurity and the policy network on internet fragmentation are cited as instances of successful multistakeholder dialogue. This underscores the significance of collaboration and cooperation among various stakeholders to ensure that technological advancements are beneficial and meet the needs of all.

In terms of future-proofing, an important observation is that high-tech solutions are not the only way to achieve this. While future technologies are often associated with cutting-edge advancements, it is important to recognise that future-proofing can also involve other approaches that do not solely rely on high-tech solutions.

Another noteworthy perspective is the advocacy for connecting multilateral spaces through people and not solely through novel technology. The analysis highlights the need to improve and enhance existing spaces where work is being done, making them more diverse, inclusive, and connected. By prioritising diversity and inclusivity in these spaces, stakeholders can foster collaboration, coordination, and cooperation, ultimately leading to more effective and equitable outcomes.

The analysis also praises the United Nations’ Internet Governance Forum (IGF) as an open, inclusive deliberative space that plays a crucial role in discussing and shaping technology governance. It emphasises the significance of preserving and enhancing spaces like the IGF, which offer unique opportunities for stakeholders to come together, exchange ideas, and collaboratively address the challenges associated with technology governance.

Additionally, transparency, engagement, and the preservation of user autonomy are considered fundamental principles that should be upheld in technology governance. The analysis argues that good governance principles, which are already known, should be applied to new technologies. This includes timely and clear information sharing that is accessible to a wide range of individuals, ensuring transparency and meaningful engagement.

Another notable point is the integration of high-level principles, specifically the international human rights framework, in guiding the use of technologies. The analysis highlights that technologies like AI and data impact various aspects of life and suggests that the international human rights framework can be embedded throughout the technology supply chain through standards. This approach promotes a rights-respecting world where everyone benefits and ensures that the development and usage of technology uphold human rights.

In conclusion, the analysis presents various perspectives on the impact and governance of future technologies. It highlights the importance of inclusive technology development, multistakeholder engagement, connecting multilateral spaces through people, and embedding high-level principles such as the international human rights framework. By considering these perspectives and incorporating them into technology governance, it is possible to strive towards a more equitable and beneficial technological future.

Gallia Daor

Intergovernmental organisations, such as the Organisation for Economic Co-operation and Development (OECD), have demonstrated their ability to be agile while maintaining a thorough and evidence-based approach. The OECD’s AI principles were adopted in an impressive one-year time frame, making it the fastest process ever at the organisation. This highlights the organisation’s ability to adapt to the rapidly evolving landscape of emerging technologies.

To facilitate global dialogue on emerging technologies, the OECD established the Global Forum on Technology. This platform provides an avenue for stakeholders from different countries and sectors to come together and discuss the challenges and opportunities presented by these new technologies. This engagement ensures that decisions made by intergovernmental organisations are well-informed and incorporate perspectives from various stakeholders.

The importance of multi-stakeholder and interdisciplinary engagement in decision-making within intergovernmental organisations is evident through the OECD’s network of AI experts. With more than 400 experts from different stakeholder communities, the OECD is able to tap into a wide range of expertise and perspectives. This inclusivity ensures that the decisions made by the organisation are comprehensive and representative of diverse viewpoints.

Recognising the need to keep pace with emerging technologies, intergovernmental organisations like the OECD have established dedicated working groups that focus on different sectors. These working groups, such as those on compute, climate, and AI future, allow for a deeper understanding of the specific challenges and opportunities posed by each sector. By focusing on these emerging technology sectors, intergovernmental organisations can proactively address the unique issues that arise within each area.

High-level principles, such as trustworthiness, responsibility, accountability, inclusiveness, and alignment with human rights, are considered important and relevant for all technologies. Intergovernmental organisations aspire to develop technologies that are trustworthy, responsible, and inclusive, while also being aligned with human rights. It is essential to factor in potential risks to human rights and ensure accountability in the development processes of these technologies.

However, there is often a gap between these high-level principles and their actual implementation in specific technologies. Variations exist between technologies, and the importance of certain issues like data bias may be specific to AI. This calls for a careful examination and consideration of these factors during the governance processes of emerging technologies.

To address the complexity and differing requirements of different technologies, there may be a need to break up the governance processes into smaller components. By doing so, intergovernmental organisations can accommodate the varying expertise and process requirements associated with different technologies. This approach ensures that governance structures are tailored to the specific needs of each technology, promoting more effective decision-making and implementation.

In conclusion, intergovernmental organisations have shown their ability to be agile, adaptable, and evidence-based in the face of emerging technologies. The OECD’s fast adoption of AI principles and the establishment of the Global Forum on Technology exemplify their commitment to staying at the forefront of technological advancements. The inclusive and interdisciplinary approach to decision-making, along with the focus on specific technology sectors, further enhances the effectiveness of intergovernmental organisations in addressing the challenges and harnessing the opportunities presented by emerging technologies.

Carolina Aguirre

The analysis considered various perspectives on technological development and governance. The speakers emphasised the need to maintain openness in both processes, drawing parallels with the Internet Governance Forum (IGF), which has nearly 20 years of experience in dealing with open technology. They highlighted that the IGF’s bottom-up approach plays a vital role in achieving openness.

The growing influence of the private sector in shaping technological developments was recognised as an important aspect. The speakers noted that many new technological advancements are being driven and progressed by private companies. This recognition indicates the need to understand the limits and the actors shaping technology ecosystems.

There was concern that new technologies are being developed behind closed doors, deviating from the open nature of the Internet’s original development. The speakers argued that such closed development is less open by nature. This observation raises questions about transparency and inclusivity in the creation of new technologies.

The speakers universally agreed that technology is not neutral and is influenced by societal values. This recognition signals the importance of considering the ethical and social implications of technological advancements. The broader impact on society must be a critical consideration in technological development and decision-making.

The adequacy of existing institutions in the face of challenges posed by globalisation and technological development was called into question. One speaker, Carolina Aguirre, expressed scepticism about the sufficiency of the institutions currently in place. The analysis revealed a need for institutions to adapt and keep up with the rapid changes brought about by technological progress.

Furthermore, the analysis highlighted the decline of globalisation in terms of trade and international dialogue. This observation suggests that traditional processes concerning internationalisation are struggling to keep pace with technological advancements.

In conclusion, the analysis presented a multi-faceted view on technological development and governance. The speakers stressed the importance of openness, raised concerns about closed development, highlighted the influence of the private sector, and acknowledged the influence of societal values on technology. Additionally, the analysis pointed out the challenges faced by existing institutions and the decline of globalisation. These insights shed light on the need for continuous evaluation and adaptation in the realms of technology and governance.

Thomas Schneider

The analysis highlights several key points regarding disruptive technologies, global digital governance, and the regulation of artificial intelligence (AI). Firstly, it emphasizes the need for a change in approach towards disruptive technologies. As technologies continue to develop rapidly, with increasing complexity, it is important to adopt a more distant perspective to effectively regulate them. The analysis suggests that machines and algorithms can play a crucial role in developing regulations for disruptive technologies, taking into account their unique characteristics and potential impact.

In terms of governance, the analysis asserts that collaboration is a better approach than conflict. It argues that leaders have been losing sight of the notion of cooperation, which is crucial for achieving sustainable and effective global digital governance. Collaboration is believed to promote a better working environment and foster long-term solutions to complex challenges.

Moreover, the analysis delves into the regulation of AI. It argues that while technology changes rapidly, human behavior remains relatively stable over time, and regulation of AI should adapt with this in mind. Historical reactions to new technologies, including fear of job loss and ignorance of technology’s potential, are cited to highlight the need for a balanced and adaptable regulatory framework.

The analysis also highlights the importance of building a network of norms in response to advancements in AI. It emphasizes the need for different levels of harmonization depending on the context and argues that institutional arrangements should adapt to technological innovations to effectively govern AI.

Additionally, the analysis makes an interesting observation about the notion of a multi-stakeholder approach. It suggests that this concept is here to stay and proposes that with technology dematerializing, rule-making should also dematerialize. This means that decisions should be made based on stakeholder involvement rather than geographical boundaries, indicating a shift towards a more inclusive and participatory governance model.

In conclusion, the analysis brings attention to the need for a change in approach towards disruptive technologies, the importance of collaboration over conflict in global digital governance, the need to adapt regulation of AI in response to human stability, the necessity of building a network of norms to govern AI advances, and the significance of the multi-stakeholder approach in dematerializing rule-making. These insights provide valuable considerations for policymakers and organizations looking to navigate the complex landscape of disruptive technologies and governance in the digital age.

Alžběta Krausová

The convergence of technologies has become a cause for concern as it raises ethical and privacy issues. The development of human brain interfaces is particularly problematic as it intrudes on the privacy of our minds. This invasion into individuals’ innermost thoughts and feelings is seen as a major problem, raising questions about personal autonomy and the protection of mental privacy.

Additionally, there is a growing recognition of the importance of defining our future world. As technology continues to advance rapidly, it is crucial to establish clear guidelines and regulations to ensure its safe and ethical use. This includes operationalizing our current ethical principles in new and unfamiliar situations that arise with technological advancements. By applying our existing ethical frameworks to emerging technologies, we can address the ethical challenges they present and ensure they align with our values and principles.

Furthermore, it is argued that considering case-by-case scenarios is necessary when making decisions about the use of artificial intelligence (AI) and other advanced technologies. While general principles and guidelines guide our ethical considerations, it is important to take into account the specific context and circumstances surrounding each situation. This approach enables us to address the unique ethical dilemmas that may arise and make more nuanced and informed decisions.

Moreover, valuing cultural understanding and emotional connections is emphasized as a means to reduce inequalities and foster positive interpersonal relations. Recognizing the diversity of cultures and perspectives in our global society can help bridge gaps and promote empathy and understanding among individuals from different backgrounds. Striving for understanding beyond a rational level, including emotional understanding, is seen as crucial for building inclusive and harmonious societies.

In conclusion, the convergence of technologies presents complex ethical challenges that necessitate attention. Defining our future world, operationalizing our principles, considering case-by-case scenarios, and valuing cultural understanding and emotional connections are key aspects that stakeholders should address. By doing so, they can navigate the ethical landscape in a way that promotes fairness, inclusivity, and respect for individual privacy.

Cedric Sabbah

Cedric Sabbah, an expert in international governance, identifies the challenges posed by the rapid development of technology and its frequent disruption for global governance. He observes that periodically, a new technology becomes a major concern for the international community. These concerns have evolved from critical infrastructure to IoT, ransomware, and internet governance. Emerging issues, such as jurisdictions, content moderation, and encryption, have also come to the forefront.

Sabbah highlights the ever-changing nature of the global tech industry, emphasizing that international organizations cannot afford to be complacent. He suggests that an agile and bottom-up approach could assist in addressing the governance challenges posed by technology. Sabbah believes that as technology constantly evolves, policies need to be regularly revisited and updated. Incorporating domestic bottom-up principles into international governance may bring value in tackling these challenges.

Furthermore, Sabbah emphasizes the importance of future-proof and flexible global tech governance. He proposes an approach that can adapt to the changing technological landscape while maintaining long-lasting effectiveness. Sabbah also recognizes the potential of multi-stakeholder processes and bottom-up approaches in enhancing the quality of global governance mechanisms. He advocates for involving non-traditional stakeholders in discussions and encourages the development of rules by specialized networks.

However, the existence of numerous international bodies and initiatives addressing similar topics raises concerns about fragmentation within these organizations. This fragmentation includes bodies within the UN as well as external entities like ITU, UNESCO, Human Rights Council, WIPO, OECD, COE, and the EU. It prompts the question of whether fragmentation is advantageous, allowing for diverse efforts, or a disadvantage that diminishes focus and resources.

In conclusion, there is a need to reassess existing concepts and explore new approaches to effectively govern emerging technologies. Sabbah’s insights underscore the significance of an agile and bottom-up approach, as well as the potential value of multi-stakeholder processes in addressing technology governance challenges. The concern regarding possible fragmentation within international organizations calls for thorough examination and coordination of processes to ensure effective resource allocation. Overall, global governance mechanisms must adapt and evolve in response to the rapidly changing technology landscape.

Session transcript

Cedric Sabbah:
Cedric, Shomael? Hi. Hi. Yeah? You guys hear me? Yes, we can hear you. Awesome. Is now a good time to start? So we are about to start. Okay. I’m watching a game of musical chairs. Hi, Cedric. Yeah, I hope so. Hi, Cedric, I think you can start. Okay, awesome. Do you guys see the PowerPoint? Yes. Okay, awesome. Okay, so, hi, everyone. My name is Cedric Sabbah. I’m Director for Emerging Technologies at the Office of the Deputy Attorney General for International Law at Israel’s Ministry of Justice. I apologize for not being here in person. My colleagues and I had to cancel our flight at the last minute due to the difficult situation here in Israel. The events taking place here are very sad, and it’s difficult for me to proceed as if everything’s a-okay, because it’s not. However, I do believe that the topic today is important. And thanks to the support of the panelists and other friends, I’ll do my best to make it as interesting as possible. So let’s get straight into it. In this afternoon’s panel, we’re going to go on a kind of a sci-fi policy adventure. I’m going to ask all of you, our panelists in particular, to project yourselves in, let’s say, IGF 2030. Maybe it’s taking place on a gigantic international space station somewhere. And you’re trying to figure out how the international community should deal with this new thing that’s happening in technology, whether it might be quantum sensing, quantum computing, quantum communications, human-machine interface, immersive technologies. And we’ll ask our panelists now how they envision the international community dealing with these issues that could arise in the future. So as you all know, technology develops rapidly. We’re seeing disruptions every year. Every few years, we’re seeing things. And those of us who follow the technology, we see it happening incrementally. 
But there’s usually like a tipping point where the international community focuses on the next big issue and decides this is what we need to deal with, only to be replaced by another issue a few years later. So just looking back in the days of, you know, when we started with cyber, so everybody was talking about critical infrastructure, and then it was IoT, and now it’s ransomware and Internet governance. In the past, I remember having a lot of discussions about jurisdiction and then content moderation. And now we’re talking about, you know, decrypting, companies providing assistance to decrypt child sexual exploitation material. For AI, no sooner than we were talking about high-risk AI and, you know, we had in mind biometrics and discrimination, all of a sudden, generative AI becomes the thing we’re talking about. So this is the known challenge of how law and policy play catch up to technology, and maybe it can’t really ever catch up. Everything is highly dynamic. And there’s never a point at which international organizations can just say, you know, we can pack our bags now. Our work here is done. It’s always evolving. And one specific issue I’d like to explore today is whether an agile and bottom-up approach can help international institutions deal with these challenges. I’m thrilled to introduce to you an absolutely all-star cast. So we have online Carolina Aguirre, a professor at the Universidad Católica del Uruguay in the Department of Humanities and also a former member of the UNESCO Expert Working Group on AI. We have Gallia Daor, a policy analyst at the OECD who coordinates the activities of CDEP. We have Sheetal Kumar, head of Engagement and Advocacy at Global Partners Digital. Dr. Alžběta Krausová online, who’s head of the Center for Innovation and Cyber Law Research at the Institute of State and Law in the Czech Academy of Sciences. And Chris Jones, Director of Technology and Analysis at the UK Foreign, Commonwealth and Development Office. 
And of course, Ambassador Schneider, Thomas Schneider, who’s Ambassador and Director of International Affairs at the Swiss Federal Office of Communications in the Federal Department of the Environment, Transport, Energy and Communications. And to me, he’s Chairperson extraordinaire at the CAI in the Council of Europe. So the structure of this session will be as follows. We’ll divide it into three parts. I’ll try to finish talking soon so we can give the floor to the panellists. First, we’ll talk about the challenges of international governance that are presented by the next wave of disruptive technologies and maybe looking at the past of AI and Internet governance to see what we can learn. Then we’ll explore whether principles of agile governance, and in particular, bottom-up principles that we know from domestic policy, can be sort of internationalized and harnessed to deal with global tech governance. And lastly, we’ll try to identify some common principles that can be long-lasting and future-proof to enable a certain degree of institutional agility without losing sight of the important things. For each of these topics, I’ll ask one or two panellists to share their thoughts, and then the other panellists can chime in. And then Alžběta, towards the end, will provide some concluding remarks and observations, and hopefully we’ll have some time for Q&A. One disclaimer, what I’m going to say is my own personal views, not necessarily the views of the government of Israel. Now, before we start, just a second. Before we start, and just to change things up a little bit, the panel includes a challenge for you, the audience, in person and online, and also for the panellists. So I’ve asked the panellists to pick a few songs and artists that they like. You see them on the right, and the names of the panellists are on the left. 
And I also picked a song, and I selected from these the songs that connect with our panel today, and also I used Bing’s image creator to generate some really nice images that are inspired by the song titles. The challenge for all of you is to try and guess who picked which song, and all the speakers, including me, will be including a small clue in the presentation to help you figure it out. And you can give your answers to me in the Zoom chat or to any one of the panellists. I was planning on giving the winner some kind of small prize, but obviously I can’t right now, so I’ll try to keep that as a rain check for next year’s IGF or some other way. So now that all these explanations are out of the way, let’s get right into it. So let’s start talking about the challenge. So I’ll address first Tomas and Karolina. So the challenge for international organisations. So the first question is what lessons can we learn from our experience with Internet governance and AI governance in order to address the next wave of disruptive technologies? Specifically, what do you think should be the role of international bodies in addressing global digital governance challenge? I’ll paraphrase something that I heard a few days ago from my friend David Fairchild in another session. Many of the international bodies right now are, you could say, analogue bodies, asking them to deal with problems of a digital world. And also, if you can briefly address what I think is an elephant in the room, which is geopolitics that have a major role to play in shaping the debates. For example, ITU discussions on Internet governance, difficulties in making progress in the UN ad hoc committee on cybercrime. So can we really have a meaningful discussion on desirable and implementable global policy goals in light of geopolitics? So we’ll start with Tomas and then Karolina.

Thomas Schneider:
Okay, sometimes it helps to turn devices on. It’s a pity that you’re not here, but of course we do understand this. But I hope to see you again soon in Strasbourg, actually. Yeah, I think it’s a nice setting because it tries to be a little bit more forward-looking than other sessions. And hopefully a little bit, let’s say, also inspiring in a different way. Well, the challenges are, let’s say, substance-based and then there are geopolitical challenges. And this doesn’t go just for intergovernmental organizations. It actually goes for all those that are somehow dealing with policy and with rulemaking. Maybe I have to start with this is a crucial moment in history and things will be completely different tomorrow than they have been yesterday because this is what you hear throughout history ever since. Speeches are recorded. Every person thinks that that particular moment in time is the moment where everything will change. And it’s true. Everything changes every day, but it’s also there’s recurring patterns in human behavior, not just in physics, but also in human behavior. So, to cut the long story short, I think, but nevertheless, we have an extremely fast development of technologies, of growing complexity, of being less material, which has effects compared to technologies that used to be material-based because you couldn’t copy them so quickly. You couldn’t move them so quickly. You cannot apply them remotely. You cannot use a car remotely while being in another continent, for instance, and so on and so forth. So, there are many similarities with previous disruptive technologies in the way that humans reacted to it, in the way they were regulated. The disruptiveness of the new technologies, I think, are of a different nature that has implications. And it forces us as rulemakers or us as society to adapt. But I’m not sure whether we have to adapt in a sense that we have also learned to think quicker and calculate quicker in our brains. That may be difficult. 
So, we have to actually probably change the way at which we look at things. We may have to look at things a little bit more, again, like maybe with the Greeks and the Romans, from a little bit more of a distance and say, okay, what are the big developments? And trying to understand them. And then maybe use machines and use algorithms to develop regulation and develop concepts to cope with algorithms because our brains may not be able to compute the nitty-gritty details also with regulation for this. And, for instance, to give you an example, we have parliamentarians now in Switzerland that use ChatGPT to formulate parliamentarian interventions and requests. And we are not yet allowed, but we are waiting for the moment where we decide because it takes resources to answer these requests. And the more we get, the more resources we need. And an efficiency gain for us would be if we could also write the reports that are supposed to reply to the parliamentarian interventions with ChatGPT. So, in the end, you have two machines talking to each other and we can both go on holidays, the parliamentarians and the administration. I think that’s something to think about in the end. But now, to be serious, we need to find ways to become more agile, more dynamic, without becoming stressed. So, we are going in the wrong way if we try to do things quicker. We have to do things differently as human beings in general, but also as rule makers. So, we need to use the new tools to face the challenges that the new tools create. Otherwise, I think it won’t work. Don’t ask me how. I’m not a technician. Maybe Vint and others know. But at least on the concept level, I think we need to find a different approach. And just two words to the geopolitical environment. 
And this is something that, as somebody who has been in this since the WSIS, since 2003, in that period, we were all still in the hope of the end of history with the fall of the Berlin Wall, with Nelson Mandela, with people with charisma, avoiding wars, creating peace, bringing people together. And we were hoping that the new technologies would bring us together, would strengthen the rules-based international order based on shared values. Unfortunately, we somehow have lost the track. And in particular, the leaders, be it dictators or be it leaders that have been elected by more or less democratic processes, are losing track of this notion of cooperating is better than fighting against each other. And I just hope, I’m also a historian, that we don’t need to go to really ugly wars in order to realize that cooperation is better than fighting each other. But for the time being, it seems a little at least unsure how we deal with this. And then, of course, technologies are not just new tools to do good things, but also to do bad things. And I’m not a prophet, so I will not go into detail. But I think we should realize and we should work together with people that realize that working together is actually sustainable. It’s also more fun. It doesn’t just create less harm. It’s actually also more fun than working against each other. Because if that’s not the case, no intergovernmental institution or multistakeholder institution works because it’s all built on a notion of we cooperate together. So you can’t blame the ITU or the UN for not producing results if those that are shaping it, i.e. the member states or the stakeholders in multistakeholder institutions, are not willing to cooperate. So this is just a few thoughts of mine. Thank you.

Cedric Sabbah:
Carolina, you’re up next. Can you hear us?

Carolina Aguirre:
Yes, thank you. So to address these questions and following on Thomas’s intervention, so I do think that we have nearly 20 years of experience on our back with dealing with an open technology as the Internet and then with AI governance as an emerging challenge, global challenge, but that also is spread out very much everywhere. I do think that we still need to make strong efforts in keeping up the momentum on spaces and processes that achieve some kind of, in a way, what the IGF does in terms of its openness and bottom-up spaces. And we are seeing that kind of reflection around some of the AI governance developments, which look positively at spaces such as the IGF and some of the Internet governance approaches that have been taken over the last nearly two decades. We do need to sort of try to understand the limits and the actors that are shaping these ecosystems. So in that respect, I do believe that keeping up this effort despite maybe the less positive and maybe less vibrant sometimes mood that we may have towards these processes is very, very relevant in line with what Thomas was mentioning concerning cooperation as well, with trying to get to some kind of mutual understanding. I also think that trying to get to the idea of working together is also related with the third part of this intervention, the question, the prompt that you raised, Cedric, concerning the geopolitics, because we are in a different time and moment concerning globalization. So geopolitics today is unfolding as it did unfold differently in the early 2000s or late 90s. And now those states are certainly extremely important. I mean, so many of these new technological developments as in the past, they are also being shaped and taken forward by the private sector. 
And so when we talk about geopolitics and address technological changes and technological momentum, I mean, we do have to also address the elephant in the room on how to sort of work and define the scope and space for action for this private sector that has an increased power. And we are seeing that kind of momentum also shaping how we address and have concerns on how some of these new technologies are being sort of developed behind closed walls and are much less open by nature in terms of what the Internet originally was and still is. And finally, as a final observation, I mean, when we think about the developments of these technologies, including the Internet, I mean, technology is never neutral. Technology is never non-reliant on societal values. So we do have to keep that in mind when thinking about developing international processes around these new technologies. Thank you.

Cedric Sabbah:
OK, thanks, Karolina. I want to give a bit of the opportunity to other panelists to just chime in. It almost seems like hearing from both of you, Tomas and Karolina, I’m grossly oversimplifying, but it’s almost like you’re saying we’re OK. We the institutions we have are in place. The world is what it is. And we’ll just have to deal. Karolina, you’re not agreeing. So I misunderstood. Could you could you just refine what I’m saying?

Carolina Aguirre:
I’m certainly not saying that we are OK. I do think that we have some interesting foundations, but the challenges ahead are enormous, particularly because I am not as keen as Tomas, I think, as I understood him, correct me if I’m wrong, who was stating that we are in a different moment in terms of how we address global cooperation as one of the angles to address globalization. Globalization is in decline in many respects, concerning trade, concerning international dialogue. So I do think that it is indeed an extremely challenging moment, and probably most of the processes that we are seeing concerning internationalization are really not up to the challenges that we face with the development of these technologies.

Cedric Sabbah:
OK. I’d like to give a few moments to Chris or anyone in the room who wants to respond to what you just heard. I think that’s a prompt, isn’t it, Cedric?

Chris Jones:
I think you want me to say something, Cedric. Not specifically you. So, I will. I’m delighted to be here. And, Cedric, I’m sorry you can’t be with us here in person, but I’m really happy to see you safe, albeit on a screen. So, you know, best of luck with everything that’s going on. I agree with what both of my co-panelists have just said. Geopolitics is a messy business, particularly right now. But I think there’s an opportunity here to focus on the areas where we agree, not on the areas where we disagree. And too often, and I’m sort of stealing my remarks from later, too often I feel we start with too big a picture. So we try to do too much in one go. I’m an engineer, and my natural tendency is to break things into the smallest possible components I can, because I’ve got a very small brain. And that means I can understand them. I can fix them. I can make them work. And I think there are some parallels here for how we work in our multilateral and international organizations in helping address some of these challenges.

Cedric Sabbah:
Okay. I think there’s a lot to unpack in everything. But we’ll have the opportunity to continue to delve in. So, I’d like to go now into a little bit of uncharted territory. We heard in a few panels over the last few days the idea of agile governance and sandboxes in domestic regulation as a way to regulate AI smartly. And what I want to ask is whether this idea can be useful for global governance as well. Are international organizations capable of being agile? Or is this concept maybe completely antithetical to the way they’re meant to operate? When we talk about bottom-up regulation, the underlying idea generally is that rather than top-down, where you have a central institution that promotes and implements processes for its constituents, in bottom-up we empower the constituents to deal with the issues based on their concrete needs on the ground. We see the good in everyone’s contribution. So, can bottom-up and multi-stakeholder processes contribute to the quality of global governance mechanisms? And if so, how? Some practical examples of bottom-up approaches to consider, and I invite you to address any one of these, or all of these, or maybe something else. One example, already done to a large extent by the OECD, is fostering policy experimentation by allowing exchanges of views: setting up a tech policy lab for international information sharing. Another is actually fostering experimentation by states, by allowing for a space in which states can succeed and fail in certain cases and then learn collectively from the successes and failures. Another is integrating into the bottom-up approach other stakeholders that are maybe not traditionally in the conversation. One example that comes to mind from our experience with AI in Israel is small and medium enterprises. And also maybe encouraging rule-making by specialized networks. 
So, instead of having, for example, the large generalist organizations that deal with the big issues, having networks of, for example, privacy regulators or cybersecurity regulators or AI regulators in the future to deal with things on their own. So, I’ll ask Galia and Chris, I’m turning to you as well again. I think each of you have unique viewpoints that you can share, so I’ll ask you to go first.

Chris Jones:
Yeah, thank you. So, I’ll go first just because I’ve been asked to. So, first of all, I’m interested in these songs, and I really hope people in the audience are doing better than I am, because I have no clue. But when Cedric first suggested it, I thought Ambassador Schneider was actually going to play them all, which would be amazing. Look, I think it’s a little bit of a loaded question, being at an event organized by a large international organization, whether they can be agile, because that could be quite a dangerous place to go. But I do think they can. I do think large organizations can be agile, but not in the way that we’re currently organized and the way that we operate. So, I think there are some parallels we can take from agile software development, where we define small chunks of activity and we work out how we define those. We don’t define the order in which we deliver them, we just define what they are. And the plan is always to get better. So, to incrementally deliver more, rather than trying to deliver everything in one go. And I think there’s a parallel there for how we work internationally. That’s what we’re trying to do with the UK-hosted AI Safety Summit. So, we can’t do all of AI. So, on the 3rd of November, AI is not going to be solved. But what we can do is focus on a very narrow slice and get some broad international agreement. And I think there’s something we can do there. The second thing I wanted to talk about was different types of governance. And I think we always tend to focus on values first. We try to agree what are the values we want to see. And this, I think, comes to the geopolitics. I don’t think we will ever agree on a common set of values. Different countries are different countries for a reason. We have different national identities, we have different things that are important to us. And we have to embrace that diversity. But that doesn’t mean there aren’t some common values we can agree on. 
So, I think we absolutely should focus on that. But there’s another type of governance, the technical governance. So, the things that we need to have in place in order to be able to interoperate, to talk, to work together. And I think it’s often easier to focus on those, because we can get the engineers focusing on the really practical details of what it takes. I think there’s a difference between how and what. And I think very often we focus on the what, whereas what’s really important is the how. And I’ll give you the example of the UK’s online harms legislation. So, that has taken us six years and we’re nearly there. But even when we get there, you could never pick that legislation up and give it to another country; it just wouldn’t work. But what would work is the process of how we got there. So, there are some key things that you need to do to be able to develop that type of legislation. You need to define what constitutes a vulnerable group. There’ll be some common themes. So, children, I think everybody agrees that children are a vulnerable group. But minority groups will be different in different countries. So, sharing the process, the how, I think is important for bringing these things together. Cedric, you talked about multi-stakeholderism. I think that is critical. I think all governance needs to be multi-stakeholder, because nobody has all the answers. So, governments certainly don’t have the technical expertise. Technology companies don’t have the legislative expertise. And none of those really understands the impact on citizens the way civil society organizations do. I think the IGF is a great example of how you bring that multi-stakeholder community together. I mean, look at the range of organizations here. So, whether you’re the boss of a telecoms company or whether you’re a Ministry of Foreign Affairs official like me, we couldn’t be more different. But we’re all here talking about common issues. 
And then finally, I just wanted to talk about, Cedric, you wanted an example of bottom-up and where this has worked. And I really like the example of the airline industry where there’s a need to work together and agree common standards. You know, we needed to fly planes from one country to another. So, we needed a way to share data, a way to build planes that could fly into different territory. And that really forced people from a bottom-up perspective to work together. And I wonder what the parallel might be for artificial intelligence or quantum or dare I say, human rights. So, thank you. I’ll hand over to my colleagues.

Gallia Daor:
Thank you, Cedric. I don’t know if you wanna respond to that first or… No, go ahead, Galia. So, thanks for this. I do love how all your examples go: I’m an engineer, so I like to break things into little bits. Well, I’m a lawyer, so I like process. And I think that will really be part of my answer, because yes, it’s very common to think that intergovernmental organizations can’t do that, that, you know, what’s agility got to do with anything like intergovernmental organizations. But I think partly it’s by design, because if we want to be, and I’m speaking from the perspective of an intergovernmental organization, if we want to be accountable to our members, if we want to be transparent, if we want to have multi-stakeholder consultations, if we want to be evidence-based, if we want to be thorough, it’s hard to also be fast. And if we want to maintain the credibility that means stakeholders actually want to come and engage with us, because stakeholders have limited capacity and limited time, and they would only come if the conversation is worth it, then I think we also need to make sure that we uphold these standards. Nonetheless, the world is changing and things are happening. And in particular in the technology area, things are happening very fast. So we can’t just stick to the way that we did things 60 years ago, for example, when the OECD was established. And I think, you know, Cedric, you mentioned sort of playing catch-up with technology, or trying to be more anticipatory and planning ahead. And I think we’re moving there. And I can give a couple of examples from the OECD’s perspective of where I think we’ve tried both to be agile and to have this multi-stakeholder, bottom-up approach. And so one example that you mentioned briefly earlier is the OECD AI Principles, which were adopted in 2019 and were the first intergovernmental standard on AI. 
So, one thing to say about that is that it was the fastest process ever at the OECD to develop a recommendation. We did it basically in one year, which sounds like a lot, but really isn’t for something so complex. And obviously it builds on a lot of work that had been done before, but the process itself was remarkably fast, and nonetheless it was also absolutely multi-stakeholder and interdisciplinary. And I don’t think we would have gotten there, I’m sure we would not have gotten there, without that kind of engagement, which was essential. Also on the AI front, as part of the work to support countries and organizations in implementing these principles, we now have a very extensive network of AI experts, with more than 400 experts from different stakeholder communities and different countries. And that actually helps us. It sounds like big machinery, but it actually helps us move fast. And I think it’s a really helpful model because, like Chris said, we can break it up into little bits and little working groups that focus on different aspects. And we can also adjust. So we started with one set of working groups, but we’ve evolved them. We now have a group that focuses on compute, which isn’t something we worked on at first. We have a group that focuses on climate. We have a group that focuses on AI futures, which is sort of generative AI plus what we might see coming ahead. So I think that’s perhaps one example. And then beyond AI, and AI has taken up a lot of space in the discussions I’ve been in over the last couple of days, so beyond AI, looking at emerging technologies, and also looking at my colleague Elizabeth here. 
At the OECD, we created the Global Forum on Technology about a year ago, with a lot of support from the UK, really as a global venue for dialogue on emerging technologies and for anticipating and preparing for the opportunities and challenges that they might bring. And I was looking at Elizabeth because she’s actually leading this project. It’s multi-stakeholder by design, but it also lets us try to move relatively quickly on these different technologies, for example quantum, for example immersive technology. So, that’s not to say everything is perfect, to your question, but I think there are ways to try to address some of these by-design challenges in how international organizations are built.

Cedric Sabbah:
Okay, thanks, Galia. Here too, I’d like to invite maybe Sheetal, who hasn’t spoken yet, as well as Thomas, Karolina, Alžběta, whoever would like to add in their two cents on this agility question. Can you? Yes, okay, great.

Sheetal Kumar:
Thank you first for having me here. Let me start with the session description and all of the technologies that are listed there: the emergence of new tech like quantum-related developments, metaverse platforms, nanotech, human-machine interfaces. And it all sounds like going to a theme park and maybe having a great time. But actually, for a lot of people, this future could be a very difficult one. For people who are already marginalized, for women, it’s not necessarily going to be a good future just because the technology is different or faster or more complex. So, as I think Karolina was saying, technology is never neutral. And what we can do about that is ensure that the development of it, and indeed the governance of it, is more inclusive. So, we can’t predict the future. I don’t think any of us would claim to do that. But what I think I can say with some certainty is that there are going to be 24 hours in every day in the future, unless something changes. So, that’s really a point about resourcing, right? If we have 24 hours in a day, we sleep for about eight hours, ideally. The rest of the time, what do we do with it? We work, try and shape this world that we’re in. And what I would say is that there are spaces already where we’re doing that work, and they can be improved. As I think Chris was saying, we can work with what we have and make things better incrementally. What does that mean for multi-stakeholder spaces where these discussions are happening? I think improving those, making them more open; where standards are being developed, making those more diverse; strengthening the IGF, for example, and connecting the discussions that happen here with the discussions that happen elsewhere in multilateral spaces. So, to give an example from the IGF, because we’re here, and I presume we all care about the IGF, that’s why we’re here: I’ve been involved in some of the intersessional discussions at the IGF. 
And what I think is a good example is the Best Practice Forum on Cybersecurity. It’s an okay example, I’m actually going to say, because I think it could have been better. We are having discussions, well, the UN is having discussions, about how to ensure that states behave responsibly in cyberspace. They’ve developed norms, and they are continuing these discussions. How to implement them has been an ongoing question, and so the Best Practice Forum over the last few years has been taking the norms and analyzing cyber incidents, big, large cyber incidents that we’re all familiar with, and assessing how those have impacted people, like first responders and the people on the ground, to inform the implementation of those norms. These are multi-stakeholder working groups or intersessionals, and we have had governments and others involved, particularly with the policy network on internet fragmentation, actually. It would be great, I think, if governments and industry and other stakeholders and civil society prioritized having, in their portfolios, time to engage with these forums and to bring their feedback, because we have to connect these spaces through people. We don’t have to connect them with some novel technology to what they’re doing elsewhere, and that way we can strengthen and empower, I think, our spaces to be more diverse and more inclusive. That also goes for opening up multilateral spaces, more through consultations, through engagement, and through modalities that really allow for meaningful inclusion. So the final point, then, I guess, is that future-proofing doesn’t have to be high-tech. It can actually be quite basic. It can actually be quite simple. Of course, I’m not saying that using generative AI to help you with your reports wouldn’t be a good idea, but it doesn’t always have to be that way, and I think there are some basic things that we haven’t done that we need to do better, and those are some examples which I hope help. Thank you.

Cedric Sabbah:
Does anybody else want to say something about this concept of agility? I’m not seeing anyone. So, okay, we’ll try to package everything a little bit later. So now, oh, before I move to the next slide: it was pointed out to me by the person who chose the song from Rage Against the Machine, and I won’t disclose who it is, that I made a mistake in the title of the song. The song is called Take the Power Back, so I’ll have to change the image later, but anyway, keep that in mind. We’re moving now to the next and, I guess, final theme for today. So I think it makes sense to say, and you’ve all kind of hinted at these concepts before, I think, Sheetal and also Galia, all of you who’ve spoken about multistakeholderism, sorry. So I think it makes sense to say that agile governance, if it’s this kind of theme that we’re trying to enshrine in the way international organizations work, doesn’t operate in an absolute vacuum. So there should be, I guess you could say, a subtle line between agility and anarchy, between experimentation and free-for-all. So the question that begs to be asked is: are there any kind of universal principles of global tech governance that should be promoted across the board? Here in the image, I connected the song Born to Run by Bruce Springsteen because it includes the line, I want to guard your dreams and visions, which I think is a nice metaphor for the idea of responsible innovation. So we have all these common buzzwords that have served us well, I think, so far in internet governance and AI governance: buzzwords like multistakeholderism, interoperability, human rights that apply offline apply online, trustworthy, human-centric. So do you think these concepts remain relevant for all other technologies, such as immersive technology, human-machine interfaces, all the quantums? Or do you think maybe they all apply, but they apply differently? 
Or do you think we might need to come up with new concepts and frameworks that enable us to grapple with the new challenges? Also, a lot of the issues are cross-cutting, so when we talk about, you know, not wanting fragmentation, we actually see a fragmentation of processes within the UN. There’s the ITU, UNESCO, the Human Rights Council, WIPO, and then outside of the UN we have, you know, the OECD, the COE, the EU, of course, which is a major player, and then we have topic-specific initiatives like GPAI, like the AI Safety Summit that Chris mentioned earlier. So this fragmentation of efforts, is it, in your opinion, a feature or a bug? So I’d like to ask Sheetal first to address this question. Any universal principles? Should we be aiming for fragmentation, allowing for fragmentation? What do you think?

Sheetal Kumar:
Thank you for those questions. I think there’s something semantic sometimes when we talk about this topic. Fragmentation, if it’s diversity, then great. If it’s, for example, normative efforts that are all aligning and reinforcing common principles, then great. If it’s duplication, and as I said, we have limited resources, if we’re going to different places trying to do the same thing, but spending our time actually developing different frameworks that are competing, then no, it’s not great. And there is a risk of that if we don’t coordinate and collaborate on some of these emerging issues. There is a lot, as I think we heard earlier, happening around AI at the moment on how to govern it, but at least we have, and this is, I know, something that people have felt fatigued about at this IGF, at least we have a space where we’re coming together and we are hearing about what everyone else is doing. We can try and make those connections and ensure these deliberative spaces and decision-making spaces are inclusive. So not necessarily, I guess, is my answer to you, Cedric: it’s not necessarily a bad thing to have various processes at play, as long as they coordinate and they’re inclusive. And I also just wanted to point out, as I mentioned earlier, the importance of connecting what is an open and inclusive deliberative space like the UN’s IGF, which is so unique, because we also need to remember that the IGF is not just this annual event; it is the intersessionals, it is the hundreds of national and regional IGFs that happen every year and that provide these spaces for people to come together, and it is very unique in that way. This is something that we need to preserve, and so if we try and create something else that is exactly like that, then that is a problem. But the leadership panel, of which I know we have a member here, is very important for creating these connections with those who can then take on messages and connect to other spaces. 
So I think what’s really important is that we ensure that when we’re governing these new technologies and building the processes for them, they’re truly inclusive by design. We have endless tools and ways to do that; we know how to do it; we need to do it. And I would say that, as I said, it’s kind of old governance, or old tech for new tech, perhaps. It’s not that complex to ensure that information is shared in a timely manner, that information is clear, that it can be accessed by a range of different people, that they’re invited to the table. And of course new technologies can also be deployed to support that. So hopefully we can turn our minds, I think, to actually operationalizing what we already have and use good examples, such as those we’ve heard before, to ensure that when we’re confronted with these new challenges, the principles, you asked me about principles, the principles of transparency and engagement, of openness, of maintaining user and people’s autonomy, and of preserving openness, all of those are enshrined and preserved as we face the new challenges that we do.

Cedric Sabbah:
Maybe I’ll, does somebody else want to take the mic?

Gallia Daor:
Hang on, yeah, sorry. Yeah, no, I was just thinking as Sheetal was speaking, and also about your questions. I think one of the things that we’ve been thinking about at the OECD, but I’m sure in other places too, is really this gap between the fairly high-level principles and what you do with them. You asked, Cedric, do we think that trustworthy and responsible and so on are relevant? So I think yes, absolutely. And I think they are relevant to, I don’t know about all technologies in the world, but I think in principle, yes. We want them to be trustworthy. We want them to be responsible, or the development to be responsible. We want accountability. We want the process to be inclusive. And obviously we want alignment with human rights where there’s a potential risk to human rights. And I think that’s also to Chris’s point earlier: these are the core values that we have to agree on, as we already did. So I think yes at the high level, but then the question is, okay, what do you do with that? And that’s where I think sometimes there would be differences between technologies. We had an AI discussion earlier, and one of the points that was raised is really about data, how important data is in the context of AI, and the issues of data that’s not representative, and bias. These are things that are perhaps specific to AI and might not be the case with a different technology. So we need to be aware that when you implement the high-level principles for a specific technology, that’s where you’d have the differences. And I think that’s related to the governance question, because that’s where perhaps you would split or break things up into little bits, because that’s where you really need the expertise, and that’s where you might need to have processes happening in different places. 
So, just a thought, I don’t know.

Sheetal Kumar:
Could I just add something very quickly on that? It’s, I think, exactly what was said about the need to integrate these high-level principles in various ways. We are now seeing that all these technologies that we’re using are impacting so many aspects of our lives, in a way that, I think, requires us to turn to what we have agreed on. And what we have agreed on is the international human rights framework. So that is a ready-made, already-agreed framework that we can embed throughout the supply chain of these technologies, through the standards. And there are means and there are tools to do that. And so I think that’s also very important. Sorry to plug my session tomorrow, but the OHCHR is co-hosting a session with us tomorrow on their report on technical standard-setting and human rights. So it really is an opportunity, I think, as these technologies evolve, to ensure that we build them so that we have a rights-respecting world where everyone benefits from them. And in that sense, it is quite an exciting theme park then, I think. If I may hook in, Cedric, this is Thomas.

Thomas Schneider:
Something that always strikes me when we talk about how this needs to evolve is that while technologies evolve, and institutions will probably somehow follow, human beings themselves are fairly stable over longer periods of time in the way they function. I often compare engines with AI, as something that has differences but also many similarities in how disruptive it is. Engines were put in machines that were either moving something from A to B much faster than men or horses or whatever, or cows, or they were put in machines that were producing something, be it food, be it goods, whatever. And it’s similar with AI systems, which are used either to generate content or put content together in new ways, or to replace not physical human labor but cognitive human labor. There are fewer animals that you can replace, because animals seem to have less cognitive capability, so it’s manpower, cognitive manpower. And if you look at the reactions, this is the point I’m trying to make, if you look at the reactions of people to engines being used in different contexts: in Switzerland, near Zurich where I live, in 1833, a group of home weavers and small and medium enterprise weavers burned down a textile factory after the government had decided not to ban such factories from emerging, which is what the weavers had demanded. They just burned it down because they were afraid of losing their jobs. And actually some of them did lose their jobs. Of course, history has since shown that more new jobs were created through industrialization than were destroyed. So the fear of losing one’s job is something that we’ve had before. Then ignorance is another thing. The last German Kaiser, Wilhelm II, said somewhere in the early 20th century: I don’t believe in the automobile; it has no future. I trust in the horse. And there are people today that say, well, this is not really going to change much and so on, everything will stay the same. Not necessarily. 
And then the other reaction is from those that banned things. In Graubünden, which is the region in Switzerland that has touristic places like the Warth and St. Moritz and others, the government banned cars from the whole territory of the region in 1900. And only 25 years later, in 1925, they allowed cars through a popular vote, because the people thought, well, actually we want to use them. And then the question is whether the people, or in this case the government, in Graubünden were more environmentally friendly or whatever? Probably not. Maybe the horse tourism industry, whatever there was, was just better organized in that region, which made them ban cars for 25 years. So we have the same reactions to new technologies, and we’ll probably have the same reactions in building a network of norms, be they technical, legal, but also cultural norms, on how to use not engines but AI in different contexts, with different levels of harmonization. In the airline business, you have much higher harmonization than in the car infrastructure, in rules on cars, but you do have technical and legal and also cultural ways of organizing things. And the same will happen with AI. And then the same needs to happen also with the institutional arrangements on how to take these decisions. And Wolfgang Kleinwächter and others already used the frame 10 years ago that we are trying to solve the problems of the 21st century with the institutional arrangements that we made in the 19th century, which in many countries coincided with the industrial revolution, when you had kings and kaisers and not yet really democratic systems. And then, more or less in line with the industrial revolution, you had the introduction of parliaments, of the division of power between the legislative, the executive, and the court system. So there too, technology has some influence not just on daily lives, but also on the institutional setting. 
And the notion of multi-stakeholder, I don’t think it will go away, because we will have to organize ourselves differently now that the technology is dematerializing. Maybe the rule-making should also dematerialize from the purely physical: I live in this country now, so the rule is made in this country for this country. Because if people move around and everything moves around, the physical fixing of rules just because you happen to be somewhere, or even worse, because you happen to have been born somewhere and have that citizenship, so that you can only decide about the rules where you were born and not where you actually live, may not make so much sense. So we may have to develop a new way of dividing power, not along geographical political borders, but maybe in more sophisticated stakeholder-based or situation-based or voluntary group-based schemes that are more representative of the people than classical 19th-century parliaments. Thank you.

Cedric Sabbah:
Thank you so much, Tomas. It’s amazing to me how sometimes, to think about the future, it’s helpful to look at history. So I would like to now turn to my dear friend Alžběta Krausová to try and package this for us. We don’t have a lot of time left, so I think we’ll skip the Q&A. So, Alžběta, you’ve been attentively listening to our panelists. I know you’ve been involved with human-machine interfaces in the past and now, of course, AI. Can you share with us, in your view, some takeaways, some overarching thoughts, action items, areas for future research that you think we should be focusing on? Over to you.

Alžběta Krausová:
Thank you, Cedric. And thank you for organizing the panel despite the situation. And my heart goes to Israel. So let me now share quickly, because we don't have much time, my observations. I made thorough notes. And I have to say that, actually, all the panelists followed up on each other so nicely. So I will try to summarize the key messages from each of them. From Thomas: the disruption is too big now, and we need to change the way that we look at things, which resonates with me very much, and I will tell at the end of my speech why. Carolina said that we really need to define the scope of action right now, and also that the private sector is gaining power, which we need to focus on. Chris focused on finding the common values and sharing the how, which I think are all very nice action points. And Gallia spoke about the importance of multidisciplinarity and involving stakeholders. Sheetal spoke about diversity and also about the importance of a space where we come together and operationalize what we already have. I think those are nice action points that actually come together very nicely, and they respond to the questions you had about disruption, agility, and common principles. Now, to my personal observations. I think that the convergence of technologies that we are facing now, which is what Thomas mentioned in the beginning, is the biggest problem. That's something that we really need to focus on. And in my personal opinion, we are kind of crossing a border, because with technologies like brain-computer interfaces, when we are able to peek inside a human brain, we are really able to cross the physical border of the human body and intrude on the privacy of our minds, and, connected with AI, read the mind and actually even influence people. We really need to ask the main question now, which is: what world do we want to live in? Because that is crucial. We need to define where we are headed.
And it is the place for international organizations to steer the development, to steer it in a way that is thoroughly discussed. And yes, there is this cost of time, where we really need to focus, we really need to go deep, and we need to give it the time and attention and thought to see where we want to go. We need to agree on how we are going to operationalize the principles that we already have. I think the principles need to be applied to new situations, as was already mentioned. We do have common values like human life and physical and mental integrity. This is something we need to consider in new ways and see what it means in different scenarios. That's why the bottom-up approach is also very important, because we need to see, case by case, what is happening, and not just think theoretically about what might happen. We need to see what is happening and react as quickly as possible while balancing it with a thorough discussion. And as you said that we should suggest some parts of a song in our final speech, I would like to say that I feel like walking the world, which means for me that we should get to know each other better and better, and not understand each other just in a rational way, but also in an emotional way, which means we should not meet at one place, we should travel, we should see each other, and we should understand each other on the human level, the complete package. Maybe this is too general an observation, but this is my position. Thank you.

Cedric Sabbah:
Thank you so much, Alžběta. This session, for me, I wasn't thinking of this, but it kind of occurred to me as we were going along: I love the idea of taking something, breaking it up, deconstructing it and then reconstructing it. And I think there was this recurring concept here: yes, a lot of these principles and concepts, we want them, but we might have to rethink how we do certain things. That's not to say an absolute revolution is necessary, but just recalibrating so that we can adapt better for the future. So thanks so much, Alžběta. I know the time is up, and I would just love this session to continue for another few hours and just hear what everybody has to say. But unfortunately, we have to stop now. I think I speak for everyone here in saying we learned a lot. I want to thank the panelists, especially Carolina, for whom, I think, it's quite early in the morning. Your interventions, I think, provide the foundations for some kind of follow-up, so maybe next year's IGF or something else. Last thing: if you were attentive and you think you can guess who picked which song, let us know. Thanks, everybody in the audience in Kyoto and also virtually on Zoom and on YouTube, and enjoy the last day of the IGF. Thanks so much. Thank you, Cedric, and all the best. Thank you, everyone. Thank you. Thank you.

Alžběta Krausová

Speech speed

149 words per minute

Speech length

681 words

Speech time

275 secs

Carolina Aguirre

Speech speed

120 words per minute

Speech length

625 words

Speech time

312 secs

Cedric Sabbah

Speech speed

163 words per minute

Speech length

2936 words

Speech time

1081 secs

Chris Jones

Speech speed

209 words per minute

Speech length

1148 words

Speech time

330 secs

Gallia Daor

Speech speed

182 words per minute

Speech length

1299 words

Speech time

428 secs

Sheetal Kumar

Speech speed

187 words per minute

Speech length

1568 words

Speech time

502 secs

Thomas Schneider

Speech speed

172 words per minute

Speech length

1931 words

Speech time

673 secs

Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Audience

The analysis uncovers several significant points concerning gender equality and cybersecurity policies. One notable issue is the exclusion of women, girls, and individuals of other genders from discussions with the private sector and tech companies. This exclusion leads to a lack of diversity and representation in decision-making processes, potentially resulting in policies that do not adequately address the needs and concerns of all individuals.

Another concerning finding is the resistance to including gender language in the final text of policies. This pushback may arise from factors such as a resistance to change, a lack of understanding of the importance of gender-inclusive language, or intentional efforts to maintain the status quo. This resistance highlights the need for greater awareness and commitment to gender equality in policy-making processes.

On a positive note, the analysis recognizes the essential role of including a gender perspective and intersectionality in cybersecurity policies. By considering the experiences and challenges faced by different genders and intersecting identities, policies can be more comprehensive and effective in addressing cyber threats. This recognition emphasizes the importance of adopting an intersectional approach when developing cybersecurity strategies.

Furthermore, civil society and the United Nations are identified as key actors in ensuring gender-inclusive policies. Their involvement in advocating for and monitoring the implementation of gender equality measures can contribute to creating an environment that values and promotes the rights and representation of all genders.

Another noteworthy insight is the recognition that gender equality is a task that requires collective support, not just from women. It is important for everyone, regardless of gender, to actively contribute to achieving gender equality and dismantling gender-based discrimination and inequality.

Education is highlighted as a crucial tool for combating setbacks in gender equality. By promoting education that emphasizes gender equality principles and human rights, societies can foster greater understanding, empathy, and equal opportunities for all individuals.

However, limitations arise during negotiations, as member states often draw red lines that restrict progress on gender language. This observation suggests that political considerations and differing priorities among states can serve as obstacles to advancing gender equality within policy frameworks.

Additionally, the analysis emphasizes the need for a gender framework for digital transformation and cybersecurity. This framework should account for the specific challenges and vulnerabilities faced by different genders in the digital realm, ensuring that cybersecurity policies and practices are inclusive and responsive to diverse needs.

In conclusion, the analysis brings attention to several key aspects of gender equality and cybersecurity policies. It highlights the need for increased diversity and inclusive decision-making processes, the importance of gender-sensitive language, the role of education in promoting gender equality, and the significance of international cooperation and civil society engagement. These insights can inform policymakers, stakeholders, and advocates working towards gender-inclusive cybersecurity policies and contribute to building a more equitable and secure digital future.

Speaker 1

The analysis underscores the critical need for cybersecurity awareness among citizens and businesses. Policymakers should actively support collaboration between different sectors to effectively address this issue. By fostering cooperation and sharing knowledge, policymakers can enhance cybersecurity practices and protect individuals and organizations from cyber threats.

Furthermore, it is crucial for policymakers to take the lead in creating awareness about cybersecurity among citizens and businesses. They can educate the public about potential risks and promote best practices for safeguarding personal and sensitive data. This proactive approach can contribute to an overall improvement in cybersecurity measures and reduce the likelihood of successful cyber attacks.

The analysis also highlights the importance of respecting human rights within the domain of cybersecurity. Policymakers should integrate human rights as a fundamental principle when formulating cybersecurity policies. It is vital to remember that real people are affected by cyber threats, and their rights and privacy should be protected. By considering human rights, policymakers can strike a balance between ensuring cybersecurity and upholding individual freedoms.

Additionally, the analysis underscores the importance of balancing innovation with securing the digital infrastructure. Many young people are involved in both positive and negative innovations in the cyber domain. Policymakers need to find a middle ground that encourages and supports innovation while ensuring the security of digital infrastructure. This balance is essential for fostering technological advancements while safeguarding against potential vulnerabilities and cyber threats.

The analysis also emphasizes the significance of including vulnerable populations in policy considerations. Often, vulnerable populations are overlooked or ignored when it comes to cybersecurity policies, resulting in their problems being disregarded. By actively including these populations in policy discussions and decision-making processes, policymakers can address their unique needs and challenges. This inclusive approach helps ensure that the concerns and vulnerabilities of all individuals are taken into account in cybersecurity strategies and initiatives.

In conclusion, the analysis highlights the importance of cybersecurity awareness, collaboration, and human rights considerations in policymaking. Policymakers play a vital role in creating awareness, fostering cooperation, and protecting human rights in the realm of cybersecurity. Moreover, finding a balance between innovation and security, as well as actively including vulnerable populations, are instrumental in developing comprehensive and effective cybersecurity policies. By considering these factors, policymakers can enhance cybersecurity practices, promote a safer online environment, and work towards achieving the relevant Sustainable Development Goals.

Veronica Ferrari

Various speakers have emphasized the importance of including a gender perspective in cybersecurity discussions. Gender is not only a technical issue; it involves power relations and encompasses differentiated risks and needs experienced by individuals. The recognition that cyber incidents disproportionately harm specific social groups based on factors such as gender, sexual orientation, race, and religion is growing. There is also evidence that legal cyber frameworks are being exploited to persecute women and LGBTQ individuals.

To promote a gender-inclusive approach to cybersecurity, there have been calls to integrate a gender perspective at national, regional, and international levels. The Association for Progressive Communications (APC) has developed a specific tool/framework to achieve this goal.

Concerns were specifically raised about cyber laws in the Asia-Pacific region, where shrinking civic space and challenges to civil society inputs were highlighted. It was noted that cyber-related laws can be used for censorship and criminalization, with specific issues concerning the Philippines.

Additionally, there was a discussion on the gender perspective of cybercrime legislation and the strategies employed. Jess and her organization have conducted research and advocated for gender perspectives in cyber policy discussions. Veronica Ferrari showed interest in gaining insights into the gender perspective of cybercrime legislation from Jess.

The international dynamics of gender and cybersecurity were also examined. The appearance of gender considerations in multilateral processes on cybersecurity was addressed, with David providing his views on the important factors to consider for a gender perspective at the international level.

In order to link a human-centered approach to existing agendas such as sustainable development and digital economy indicators, recommendations were made within a gender framework. This highlights the importance of aligning cybersecurity with broader goals and keeping a focus on human well-being.

Veronica Ferrari agreed on the significance of continued advocacy, research, and raising awareness about a human-centered approach while rethinking the concept of security. This emphasizes the need to push for gender inclusion in cybersecurity, generate more evidence, and promote a shift in security perceptions.

In conclusion, integrating a gender perspective into cybersecurity discussions is vital. Recognizing and addressing differentiated risks and needs, the disproportionate impact of cyber incidents on different social groups, and the misuse of legal frameworks are crucial steps towards establishing a more inclusive and equitable approach to cybersecurity.

Kemly Camacho

The analysis delves into various aspects of cybersecurity strategies and the involvement of different stakeholders in promoting gender equality. One key point highlighted is the significance of budget allocation in cybersecurity strategies. For instance, the discussion brings up Costa Rica’s cybersecurity strategy, which primarily focuses on reacting to cyber incidents rather than proactive prevention. This indicates that budget allocation plays a crucial role in defining the government’s vision and priorities, including whether gender is prioritised in the strategy.

Another significant aspect discussed is the role of civil society and training in cybersecurity. Sulá Batsú, an organisation, is mentioned for convening a network of organisations across different fields to advocate for cybersecurity. They also conducted a comprehensive six-month training programme aimed at educating various sectors about the importance of cybersecurity. This evidence underscores the positive impact civil society and training can have in enhancing cybersecurity measures.

A mixed sentiment is observed regarding the private sector-led push to include more women in cybersecurity. While the intention appears to encourage gender equality, there is concern that this push may be driven by the private sector’s need to address resource gaps, rather than a genuine commitment to gender equality. This highlights the importance of ensuring that motivations for gender inclusion are rooted in equality and not solely economic interests.

The analysis also advocates for greater women’s leadership in the IT and cybersecurity sector. It highlights the stagnant percentage of women in the Latin American IT sector, which has remained unchanged for the past 15 years despite investments and efforts. The unique qualities and analytical leadership that women can bring to the sector are recognised as valuable contributions.

Furthermore, the analysis emphasises the need for safe digital spaces, drawing a parallel with the concept of safe neighbourhoods. It suggests that just as people require a safe physical environment, they also need a safe digital space. While the initial idea of integrating women in the IT sector is viewed positively, it is argued that more needs to be done to ensure genuine inclusivity.

Additionally, the analysis draws attention to the violence faced by women in the IT sector, framing it as a form of violence against women. It highlights that the challenges experienced by women in the sector are often not integrated into conversations around violence against women. The existence of extensive research on the difficult conditions faced by women in IT further supports this assertion.

Overall, the analysis sheds light on various dimensions of cybersecurity strategies, the importance of stakeholder involvement, and the need for gender equality. It provides evidence and insights into the factors that influence cybersecurity strategies, the role of civil society and training, private sector motivations, women’s representation in the sector, the need for safe digital spaces, and the recognition of violence against women in the IT field. These findings offer valuable considerations for policymakers, organisations, and individuals seeking to promote cybersecurity and gender equality.

Speaker 2

The cybercrime law in the Philippines has faced significant criticism due to its potential threat to the rights of women and LGBTQ+ individuals. One of the main concerns stems from the broad parameters and nebulous key terms surrounding the provision about cybersex, which is seen as a potentially serious threat to these marginalized groups. Additionally, the law also criminalises cyber libel, further limiting freedom of expression and raising concerns about possible misuse by authorities.

Another issue with the cybercrime law is the imposition of excessive penalties for crimes involving the use of Information and Communication Technologies (ICTs). These penalties may not be proportionate to the offences committed and can lead to unfair and disproportionate punishments.

However, there has been positive development in recent times. The problematic provision regarding cybersex in the cybercrime law has been repealed. This significant change is the result of years of advocacy by women’s rights groups that tirelessly worked towards addressing the flaws in the legislation. The repeal was enacted through a provision under new legislation addressing online sexual abuse and exploitation of children, demonstrating a shift towards a more comprehensive approach to protecting vulnerable individuals online.

The success of repealing the problematic provision highlights the importance of collaboration and building alliances to effect changes in flawed cybersecurity policies. Women’s rights groups, children’s rights groups, and LGBTQ+ groups came together to advocate for the repeal. Their concerted efforts, along with the support of a champion in the Philippine Senate who is open to dialogue with civil society, have been crucial in achieving this positive outcome.

Overall, while the cybercrime law in the Philippines still has its flaws, the recent repeal of the problematic provision about cybersex is a significant step towards addressing concerns about gender and human rights. It underscores the power of advocacy and collaboration in bringing about meaningful changes in policy. The journey, however, does not end here, and continued efforts are needed to ensure that cybersecurity policies align with international standards and protect the rights of all individuals in the digital realm.

David Fairchild

The analysis of David’s remarks sheds light on several important points concerning gender inclusion in cybersecurity and international policy. David underscored the significance of multilateral processes in advancing this cause. He noted that Canada has consistently supported gender issues as a crucial component of their foreign aid policy, reflecting the country’s commitment to promoting gender equality on the global stage. However, David also expressed concerns about the potential negative consequences of overemphasizing gender. He cautioned against an excessive focus on gender, highlighting the strategic disadvantages that can arise from such an approach.

In addition to advocating for multilateral processes, David highlighted the importance of education and understanding in addressing gender issues within technical fields. Specifically, he referenced the International Telecommunications Union, emphasizing the need to ensure that gender equality and understanding are prioritized in highly technical areas, where human rights may not always receive sufficient attention. David further emphasized that gender equality should not be viewed solely as a women’s issue, but rather as an issue that requires the support and involvement of everyone.

The analysis also revealed David’s observations on the ongoing debates and pushbacks surrounding gender language, even within progressive platforms like the UN. He cited an unnamed state’s call to end the integration of gender-related language in UN documents, demonstrating the challenges faced in promoting gender inclusion. Moreover, David noted that some countries or blocs may use gender language as a bargaining chip during negotiations, further complicating the progress towards gender equality.

In conclusion, David’s remarks emphasized the crucial role of multilateral processes in promoting gender inclusion in cybersecurity and international policy. While commending Canada’s ongoing support for gender issues, he warned against the negative effects of overemphasizing gender. David stressed the need for education and understanding regarding gender issues in technical fields, highlighting the International Telecommunications Union as an example. Furthermore, he highlighted the ongoing debates and pushbacks surrounding gender language, underscoring the challenges faced in advancing gender equality. The analysis revealed both positive and negative sentiments expressed by David, reflecting the complexity and ongoing nature of these important issues.

Session transcript

Veronica Ferrari:
We are a small group, but we are very happy to be here. I'm the advocacy coordinator at the Association for Progressive Communications. I invite those who are in the room and want to come a bit closer, that's fine. We are a small group, so the idea is to have a conversation and to hear from you also. So quickly, for those who don't know me, I work on social and environmental justice and its intersections with digital technologies. And in today's session, we are going to discuss, as you may know, gender perspectives in cybersecurity, specifically cybersecurity policy. So we all know that traditionally cybersecurity debates were mainly centered on national security and the security of systems. But in recent years, we are seeing an increased focus on international security, and we are also seeing increasing recognition of the need for a human rights-based approach to cybersecurity, which is an approach that places humans at the center, since they are the ones impacted by cyber threats and cyber operations. And additionally, we see more and more recognition in international, regional, and national spaces of the fact that different social groups in different parts of the world experience the internet differently, and research by the Association for Progressive Communications and others has shown that cyber incidents disproportionately impact and harm individuals and groups in society on the basis of their gender, but also their sexual orientation, their gender identity or expression, as well as their race and religion.
So, we have been documenting and producing research that shows that, around the world, legal cyber frameworks are being used to silence and persecute women and LGBTQ people for their activism, their gender expression, or simply for expressing dissent. What do we mean by a gender approach to cybersecurity? How can we integrate such a perspective in debates at the national, regional, and international levels? So, we have a lot of questions about this, and it would also be great to discuss what issues this agenda should focus on in the future. So, for this, we have great speakers here who will be sharing examples of how cybersecurity directly affects the lives of women and diversities in different regions of the world. They will also tell us what the status of the integration of gender in cybersecurity is, as well as what the biggest challenges are that we need to face. So, quick intros for our great panel. Our first panellist is Kemly Camacho, co-founder and current general coordinator of Sulá Batsú. Next, we have Grace Githaiga, CEO and convener of the Kenya ICT Action Network, KICTANet, which is a multi-stakeholder platform for people and institutions interested and involved in ICT policy and regulation. Also joining is Jessamine Pacis from the Foundation for Media Alternatives, where she works on issues related to privacy, data, and cybersecurity. And finally, we also welcome David Fairchild, First Secretary at the Permanent Mission of Canada. David focuses on digital policy and cybersecurity and represents Canada, for example, at the UN Open-Ended Working Group on ICTs. So, again, thank you all for being here. So we plan to have a round of interventions from our speakers, and then the idea is to open the floor up to questions.
So, before we dive into the discussion, I quickly wanted to provide some background on APC's thinking on gender and cybersecurity, and also say a bit more about the specific tool we have developed, these brochures. We have copies in English and Spanish, and you can use the QR code here to download it. So, firstly, for us, it's important to note that a gender approach to cybersecurity is not only a women's issue; gender goes beyond that, and gender is about power relations. There is also the idea that cybersecurity is not only a technical issue, and that technological and policy solutions can actually contribute to, or be used to mitigate, discrimination and inequalities in societies. So, for APC, a gender approach to cybersecurity is about understanding and addressing the differentiated risks and needs faced by different subjects. Cybersecurity should be explicitly intersectional, so it should take into account gender together with the other intersections and factors that compose our identities, such as race, ethnicity, religion, and class, so that cybersecurity is actually responsive to the diversity of security priorities, perceptions, and practices of different groups and people. Our approach also recognises that we are all active subjects that have agency in the process of creating a more secure online environment for everyone, and it questions and works to overcome one of the main challenges regarding the security of the internet.
So, all in all, this perspective means that in every step of the design, implementation, and evaluation of cybersecurity measures and policies, the goal should be to positively impact the greatest number of people in all of their diversity, and to make sure that neither the security of systems nor human rights are weakened. So, the framework that I mentioned before, just quickly about that: basically, from our research and an initial mapping that we conducted at APC, we found it difficult to find references to gender in cyber policies, and we found that there were not enough technical recommendations or guidance on how to incorporate such a perspective into cyber policy. So, because of that, we believed it was key to offer a reflection on why it is important to include a gender approach to cyber, but also guidance on how to do it. So, in collaboration with cybersecurity and gender specialists, activists, and also policymakers, we developed this framework, and we think it could be a useful tool for civil society when working at the national level, engaging in regional discussions, and also at the international level, and we also think it could feed the discussions happening, for example, at the UN on cybersecurity. So, basically, we want to help and support different audiences and groups in different ways.
So, this framework is made up of an overview document that combines norms, standards, and practices used to understand the role of gender in cybersecurity, from Human Rights Council resolutions to ITU guidelines and reports of UN processes. We have another document that maps the existing research addressing gender and cybersecurity, which is still scarce but has been growing in recent years, and also an assessment tool that provides practical recommendations for different audiences and in different ways, also thinking about the international and regional organizations that provide advice for the development of cybersecurity strategies. So basically this framework has been designed as a starting point. We acknowledge that the recommendations are general and that we need to adapt them to local and national contexts. This is why we have been organizing regional conversations with civil society and policymakers, in regional IGFs, to socialize and also enrich this framework, and we are discussing it now with the IGF community. So I will stop here. So we would like to hear from our speakers. Kemly, if that's okay, I will start with you. So Sulá Batsú has a lot of experience engaging in cyber policy in Costa Rica but also in Central America. So I wanted to ask you: what are, in your view, the main issues that a gender perspective on cybersecurity should consider in the region, and also, what do you think is the status of the integration of a gender perspective in cybersecurity policy in Costa Rica and more broadly? So yeah, I would love to hear your thoughts on this. I can pass you this mic. Okay, you have it there. Thank you, Kemly.

Kemly Camacho:
Thank you. Thank you, Veronica, for the invitation, and thank you, everybody, for being here. This is, I think, a really key discussion. I decided to go to the very practical aspects, based on the experience we have had since 2018 working on integrating gender into policies in general and into the cybersecurity strategy in Costa Rica. I'm going to try to reflect a little bit and draw out some of the good practices and the lessons learned. Just very fast: at Sulá Batsú, in Costa Rica, we have a policy on gender, science, and technology, which is the big framework for working on these issues. We participated very actively in building this policy, and later we were designated to elaborate the monitoring and evaluation framework for it. And now, this year, we have been designated to begin, when I return, developing the action plan for the policy on science, technology, and gender in Costa Rica. Also, as representatives of civil society organizations, we are part of the National Committee for the Cybersecurity Strategy. I don't know about all your countries, but in Costa Rica this is mandatory: you have to have a multi-stakeholder committee to develop and follow up the policies and strategies. So we are part of this committee. The first thing I have to say is: budget. Where the budget is allocated. To be honest, this is what makes it possible to do or not to do things, and it defines the vision of a government, what it is going to prioritize; whether gender is prioritized in the strategy shows in the budget. This is one of the lessons learned that we wanted to share. As Veronica said before, we have two moments in Costa Rica. Costa Rica was hacked as a country in 2022, right after the pandemic. 
Yes, we were totally hacked as a country: health data, banking data, everything was hacked. So there is a before and after the hacking. And around the same time as the hacking, we got an authoritarian government, where the previous one had been more open. So we were hacked and had an authoritarian government, okay? I wanted to say first that the cybersecurity strategy was, at the beginning, totally oriented to responding to the attacks. That was the cybersecurity policy; I imagine in many of your countries it is the same. Nothing more than that. And all the budget was dedicated to reacting to cyberattacks. Even with that, we were totally hacked as a country. But when we were hacked, something very important happened: because the country was not prepared, the private sector was asked to be in charge of the cybersecurity of the country, and this is something that continues happening. Some governments were also asked to support the country on the cybersecurity side. So in this context, we have tried to integrate the gender perspective into the cybersecurity strategy. What did we do to try that? One thing we did was to convene civil society organizations as a network. As representatives of civil society organizations, we convened a network of organizations that were not interested in cybersecurity at all: organizations working with kids and young people, organizations working with sex workers, organizations working on the environment, organizations working on HIV, LGBT organizations, a really big network of organizations, to do advocacy based on this big movement. Otherwise, it is impossible for us to integrate a gender perspective into the strategies. 
One first thing we had to do with these organizations was a training programme about what cybersecurity is, and why it is important for, say, an organization working on indigenous issues. That education part, about what cybersecurity is, using a popular education approach, is something I think we have to do. For these organizations, even more than cybersecurity, the worry is the management of personal data, which is connected, but is not the same thing. So this is something we have learned: we had to dedicate almost six months of training programmes for people to really understand not only what cybersecurity is, but what the connection is between cybersecurity and, for instance, sex workers. This process is, for me, crucial, because it is the only way to really advocate; we believe a lot in advocacy based on social movements. That is one point I wanted to make. We have discussed, I don't remember in which of the panels, the issue of consultation, because we were consulted by the strategy builders. We participated a lot in that consultation, we dedicated a lot of time, we made recommendations, we commented on everything, and when the first cybersecurity strategy came out, none of the civil society comments had been integrated. So this consultation process is also something we have to take very much into account. I'm going to finish very fast, I have other things, but I wanted to say that after the hacking and the arrival of the authoritarian government, the previous cybersecurity strategy, I don't know if this has happened in your countries, but when the government changed, they trashed it, okay? All these processes, they trashed them, and they began another process. 
They began another process, and this is something we also have to take into account when working on these issues, because we have to begin the whole process of developing the strategy all over again. Something else I wanted to say: in the second strategy, led by the private sector, by the big companies that have their headquarters in Costa Rica, they are pushing a lot for having more women studying cybersecurity, and this is one of their most important strategies inside the cybersecurity strategy: women studying cybersecurity as the gender focus of the strategy. And of course, this is wonderful, more women in IT and all of that, even if it is driven by the private sector agenda, to cover the deficit of human resources that they face at this moment in responding to all the digital development. But we have to be very careful with this part, because it is not necessarily an aspect related to a gender approach to the cybersecurity strategy. Just two more words. What we could do was to integrate into the policy on gender, science, and technology a big area related to violence against women, a big strategy. We could integrate that, and because this is the umbrella, maybe we can take this part to develop the strategy. We could also integrate data monitoring on gender, and not only women but gender broadly: data monitoring of violence related to people living with HIV, sex workers, etc. Even if we know that violence against gender diversity is not the only thing, those are the issues that, at this moment, we could integrate based on our practice. So I'll leave it there. Okay. Thank you, Kemly. Yeah, so many great things that Kemly shared

Veronica Ferrari:
from their experience working at the national and regional level: from the need for awareness at the very beginning, to the need to form coalitions and link with organizations working on other agendas, not necessarily on cybersecurity and gender, but on human rights, development, and children's rights more broadly, and also the need to think of a gender perspective in cyber beyond the idea of diversity and the inclusion of more women in ICTs, which I think is really important. So I'd like now to turn to Grace. Grace, you also have extensive experience working on cybersecurity policy at the national, regional, and international levels, including direct work, for example, on cyber capacity building for groups that experience marginalization, such as women, but also persons with disabilities and people living in rural areas. So I wanted to ask you about the intersectional challenges that policymakers should consider when working on cybersecurity policy, and how policymakers can effectively address these intersectional challenges, which are about gender but also about broader inequality issues. So yeah, I would love to hear your thoughts about that.

Grace:
Okay, thanks, Veronica. Before I respond to your question, I just wanted to say that at KICTANet we work on cybersecurity and cyber hygiene, in line with our mandate to push for the inclusion of communities in ICTs in whatever we do. For example, we have dedicated an entire programme to working with women in all their diversity, and this has included training women in digital security and cyber hygiene practices, and encouraging them to form communities of practice so that they are able to protect each other, especially when they are attacked online, and to push for their issues so that they get policy attention. In terms of working with other groups, as you have raised: we work with persons with disabilities, with farmers, with home caregivers, and with youth in the informal settlements. That is our work at the national level, and we also sit at the Kenya CERT as the civil society representatives. In terms of regional work, we run what we call TATUA, a digital resilience centre; "tatua" is Swahili, meaning to solve, and this is to support social justice organizations, organizations working on very sensitive issues, to basically enhance their digital resilience. And then, of course, internationally, we participate in the open-ended working group, making sure that we bring on board the perspectives of ordinary people in those conversations. 
Now, to the question you have asked me about what policymakers should consider at that intersectionality: the first thing I want to say is that cybersecurity, unlike other policy issues, is complex; it is a multifaceted field that intersects with different stakeholders and different domains. Because of that, policymakers need to consider a range of intersectional challenges, and I am just going to highlight three. The first, drawn from my experience of working with community members who are not particularly knowledgeable about cyber issues, is cybersecurity awareness. Policymakers need to understand what informs the perpetrators, but also ask, for ordinary people: do they really understand what cybersecurity is all about? So awareness is a very critical part of the policy, and apart from coming up with policies, policymakers need to be at the forefront of supporting awareness creation among citizens and among businesses, and to support collaboration between different sectors, so that citizens understand how the services and platforms they use relate to existing law and to the conduct expected in society. 
That's one part of the perspective. The second issue I want to talk about is human rights. When we work in civil society, we work with ordinary people, and there is often no public space for them to speak about what institutions are doing or to have their views listened to. So human rights, and the rights of vulnerable populations, are a vital consideration, and when I say vulnerable populations, it is because there is that element of thinking up here and forgetting that there are people who are affected down there, and not thinking that the issues of the people down there matter. And finally, when it comes to cyberattacks, it is important to remember that we have a lot of young people in the system who are innovating, both positively and negatively, because a lot of cyberattacks actually come from young people who are unemployed and are constantly thinking of how they can make money, so they are always thinking of how to break into banks and companies. The tendency is to respond to that with policy that sometimes curtails innovation. Policymakers therefore need to keep up with rapidly evolving technologies and the ever-changing threat landscape, and the thing about the threat landscape is that today's threats will be identified, and once people know those have been identified, they are constantly thinking of how to get around what has been done and come up with new threats. So policymakers need to stay ahead, and there is a need to balance the need for innovation with securing the digital infrastructure. Thank

Veronica Ferrari:
you. Thank you, Grace. I think, yes, you mentioned a lot of critical points, but I was thinking about the need to actually involve the communities and groups that experience these differentiated impacts, and have specific needs and perceptions around cybersecurity, when security policies are actually drafted, but also implemented and evaluated. In the framework that we put together, there are some recommendations in that regard, so I think it is a key point to keep in mind. Thanks so much for sharing that, Grace.

Veronica Ferrari:
I would like to turn to Jess now, if that's okay. As I mentioned, we have been organizing some regional conversations around this framework, and we organized a session during the Asia-Pacific IGF. Participants there highlighted challenges in the region related to a shrinking civic space and to civil society inputs not being taken into account, a challenge that, as we clearly heard from Kemly, appears in other regions of the world too. Another thing that came up in that conversation is cyber-related laws that are ultimately used to censor and even criminalize. You and your organization have done research and advocacy around those issues in the Philippines context. So I wanted to ask you if you can briefly share what the problems were, from a gender perspective, with cybercrime legislation in the Philippines. And it could be useful for all of us if you could share the strategies you put in place to engage in cyber policy discussions and bring gender and feminist perspectives. So yeah, thanks so much, Jess.

Jess:
Thank you, Veronica, and thank you, firstly, for inviting us to share our experiences from the Philippines. We also have a national cybersecurity plan, which is actually currently in the process of being updated this year, so I hope we will have time later so I can also talk about that. But as to the cybercrime law, which is another piece of legislation that is very crucial and impacts gender a lot, let me start by saying that the cybercrime law of the Philippines actually has a lot of problems. So from a human rights perspective 
in general: we have the criminalization of cyber libel, and we have a very generic, wide-reaching blanket provision that imposes excessive penalties for crimes committed with the use of ICTs. But one of the most problematic provisions, especially related to gender, is that the law introduced a new crime called cybersex, very broadly defined as the willful engagement, maintenance, control, or operation, directly or indirectly, of any lascivious exhibition of sexual organs or sexual activity, with the aid of a computer system, for favor or consideration. It is a very broad definition, and the law did not even define some of the critical terms: what counts as lascivious exhibition, what counts as sexual organs, what counts as sexual activity. That makes this provision prone to arbitrary interpretation by whoever is made to interpret it, and so it brings us to a situation where even consensual acts done online, or artistic works, or legitimate expressions of women and LGBTQ persons, for example, could fall under this criminalized provision. Also, considering that Philippine society and culture are still highly patriarchal, and that we are predominantly Catholic, there are still a lot of conservative values there, and with this policy made subject to those kinds of moral standards, it really disproportionately endangers women and LGBTQ persons and their freedom of expression. The good news is that this provision was actually repealed very recently, I think early last year. It was not through an amendment of the entire cybercrime law; it was through a repealing provision under new legislation on online sexual abuse and exploitation of children. So it was quite an unconventional route that it took, and it was not ideal. 
But I think we also need to recognize that this was a product of years of advocacy by women's rights groups and LGBTQ advocacy groups in the Philippines. As to the second part of your question, on the strategies that led to this small victory, as I consider it: like Kemly mentioned earlier, it was really working with the networks. It was a lot of collaboration and coordination across different advocacy groups: women's rights groups, children's rights groups, because, like I said, it was repealed under a law on online sexual abuse and exploitation of children, so we also worked with children's rights groups, and with LGBTQ groups. Because the law is problematic on so many different points, it was also very clear to us early on that we had to attack it from different points of entry. And it was fortunate for us to have a champion in the Philippine Senate who is a staunch advocate of women's rights and remains open to speaking with civil society on various issues, including cyber policy. That was, I think, a key point in pushing for that kind of legislative change. No worries. Thanks so much, Jess. I think that was a key point:

Veronica Ferrari:
the idea of forming coalitions and also identifying a champion within the government, who may be working on cybersecurity specifically or not. I would like now to move to some of the international discussions, and to David, because I would like you to share a bit of what is happening at the international level and how gender considerations appear in some multilateral processes on cybersecurity, for example, the UN open-ended working group on ICTs, and what are, in your view, the crucial factors that a gender perspective on international cybersecurity should consider moving forward. So over to you, David, thanks.

David Fairchild:
Great. Hi, everybody. It's the end of day three, bottom of the seventh, nine o'clock in the morning, and I'm going to talk a little bit about the importance of the multilateral processes. I'll pretty much skip most of my notes and get to the point. Canada has long supported gender issues at the international level. It is a core component of our foreign policy and our international aid policy, so it goes without saying that, of course, we support this issue entirely, and I won't spend a lot of time on that; despite the fact that I've probably got two pages of notes, none of it is really that relevant. What is really relevant is painting a bit of a canvas of what is going on, because I think people only see the final product, right? What you don't see is everything in between, in the interim period, behind closed doors, where countries like Canada and like-mindeds are fighting for the inclusion of specific language that I think we would all agree with, while a cast of countries, which I won't bother naming, I'm sure you can figure out who they are, are, for their own purposes, pushing an alternative narrative. This is a constant fight, and it is not going away. I cover lots of UN agencies, I sit in Geneva, including the Human Rights Council, where this is often a front-and-centre element of many negotiations. 
This is more of a clarion call to repeat that we are not necessarily winning the war; we are winning battles, but the war is not over, and it is of critical importance that we continue to frame our activities in a rights-respecting way. The OEWG itself has a norm, Norm E, which says that countries must respect, in their uses of cyberspace, basic international frameworks, including the UDHR. Some countries, as we know, may say that they respect the principles and the framework, but their implementation falls short. We are seeing backsliding on SOGI language; we are seeing efforts by some countries to reframe how we talk about rights, away from individual rights towards people-centric rights, which we know is a crafty way of reducing the role of the individual and playing up the role of the state. These are unfortunately traps that some people fall into, because this language gets brought to different forums, in different ways, and some of the people in those meetings aren't necessarily as imbued with human rights expertise as in other places. I cover the ITU, which is also a fascinating place if you want to spend a few hours. One would think standards are not political, but we do sometimes find ourselves wrapped around the axle fighting over gender language. I've been up until midnight, two in the morning, fighting about the inclusion of gender language in a technical standard negotiation. It's not pleasant, but it's necessary. And so I don't really want to spend time on the notes, because I don't think that's what is relevant. What is relevant is to reinforce to this community that Canada is in the room, we are fighting, but we need support. We need to continue to raise our voices to those who disagree, and we need to be sophisticated. 
There is also, of course, a trend of overemphasizing gender, and that can in fact have a strategic negative effect. So it is about being smart, being nuanced, and being appropriate about where we want to push. But we just need to keep pushing. This is not going to go away, and frankly, as we all see, cyber, digital, and tech are becoming much more front and centre in international geopolitics and geostrategic competition. There is a new set of fora that are not as well imbued with human rights understandings as fora like the Human Rights Council, which have a much more mature conversation and people who understand the issue. So it is imperative that we support the technical community, and that we support civil society in the member states to the extent that we can, to understand why we need to make sure there is no backsliding and that we reinforce the existing international human rights frameworks. I think that's more important than what somebody from Ottawa sent me yesterday. I will stop there.

Veronica Ferrari:
Thanks for that, David. We were just talking about the need to identify champions, and Canada has been pushing for the inclusion of this type of language in negotiations and has also been a key ally in terms of civil society participation in some of these international processes, recognizing how important it is, as Grace and others were saying, to have the groups affected in a differentiated way, and the organizations that try to bring these perspectives, in the discussions. So thanks for that. Now I have a couple of questions; I don't know if we can technically show them on screen instead of my face at that size. I have a couple of questions for the speakers, but also for anyone from the physical or online audience who wants to jump in, because I wanted to quickly hear your thoughts on the main challenges. Jess, you mentioned some of them, and Kemly too. So: the main challenges you have faced, or think you would face, when advocating for gender and intersectional perspectives in cyber policy; any thoughts on how a tool like this framework could support different stakeholders in integrating a gender perspective into cybersecurity policy and norms; and also, what other resources or support do you think you need to champion gender in cybersecurity policy in your work, any specific resources or guidance that could be helpful? I just wanted to open the floor to see if there are any thoughts from the audience, but I would also like to hear from the speakers. So I see a hand there. Do you want to jump in? Yeah. Can you pass the mic to the colleague? Thank you.

Audience:
Thank you so much. My name is Ahmed Karim, from the UN Women Regional Office for Asia and the Pacific, and I have three quick questions. The first, which I have also noticed here during the IGF, is that the conversation with the private sector and tech companies is very gender blind: most of the time it generalizes all users into one basket, or takes a global perspective, or the focus is just on minors, while women and girls are excluded, and other genders are also not part of the design. I wonder if you have any strategies or specific ways to change that conversation, make it a little more gender sensitive, and include gender in the design of the platforms themselves. The second question relates to the inclusion of the cybersecurity agenda in national action plans, and I wonder if any of you have had that experience within a national context, and what elements of the cybersecurity agenda could be included there. And the last question is more for David, on those nuances in between, before the text is finalized: what are the main issues that really get pushback against the inclusion of gender language in the final text, where do you think this is coming from, and how can we, from civil society and the UN, help in alleviating some of those concerns about inclusion? Thanks. Great questions. Thanks so much for that. I see Angela's hand. Do you want to jump in? And then we'll try to address and distribute the questions. I'm not sure if you can hear me. Please go ahead. I wanted to attempt question three, on what we can do to bring the gender agenda into the cybersecurity space, and also to respond to your questions and concerns, because I have the same concerns. This is something I have spoken about with Grace: that we need to have research on this ourselves. 
It is very hard to go into this discussion without it, but we both know that cyberattacks disproportionately affect women, minorities, and sexual minorities. Even just thinking about what kind of data those who handle attacks have on the complaints they receive: research on that, in cybersecurity, would give us helpful insights on how to deal with these issues. I think it is very important to have a discussion about the impact, even in monetary and mental terms, so that it can enrich the policy decisions that are made. So I think that's my contribution to that question.

Veronica Ferrari:
Thanks so much, Angela, for the contribution and the response to the colleague here. Do you want to address some of the questions? I think we have time for one or two. Kemly? Kemly, go ahead, please.

Kemly Camacho:
I wanted to address the first question; it is my passion, to be honest, because I have been working on this for a long time, and I have to say that we have shifted the focus of our work there a lot, okay? And I want to say this because I think it is really important. At the beginning, we started with this idea of integrating more women into the IT sector, doing capacity training for them to be integrated into the sector and to have job opportunities there, because it really is a sector of opportunities, and I think this is good, but it is not enough. For UNESCO, I did a mapping in Latin America of all the initiatives to attract women to the IT sector and integrate them into it, and there are a lot, but at least in our region, the percentage, 20% to 80%, hasn't changed in 15 years. It hasn't changed at all, even with all this effort and all this investment. And I think this is because the IT sector is very expulsive of the diverse, and the conditions for women studying and working in the IT sector are hard. So at one point we decided that, instead of continuing to do what others are doing, which we think is part of women's economic rights, we would work much more on creating a women's leadership for the IT sector: an analytical women's leadership, understanding their own conditions. And this is connected with cybersecurity: what it means to be part of this society as women, and how we, women in the IT sector, can contribute to the struggles of women in general. 
And this is where I connect with the third question: this solidarity, solidaridad as we call it in Spanish, where we have to connect the process of building this women's leadership to reflecting on cybersecurity from this really analytical and collective action of women in IT supporting women. For us, this is the strategy. We think it is crucial that women work in and study IT. But the problem is that we have a lot of evidence, because we have done a lot of participatory research with them about the conditions in which they work and study, and that is something we also have to change. For us, this is part of the violence against women, a violence that we haven't yet integrated into the discussion around violence against women. So this is my point: a big leadership of women in IT supporting the women's agenda, including cybersecurity. And just to finish: we understand cybersecurity as the right of people to have a safe space in the digital world, just as they need a safe space in their neighbourhood. That is the way we are focusing. Thank you for the question.

Veronica Ferrari:
Do you want to quickly address some of the questions? Then I'll try to go to David, so we don't forget that question about the pushback in international negotiations, and there is one more question. Go ahead briefly. Yes, very briefly, because it's also related to what Kemly said. I was thinking about this based on the questions that you posed, but it might also address your concern. I really think that we have to go back and re-evaluate our concepts of security. Because, like you said, the way we frame security issues now is still very highly masculinised, and unless this kind of thinking is addressed, everything that we would do, even if we push for policy changes, even if we encourage women to go into the tech and ICT sector or the cybersecurity sector, all of those will just be stopgap measures. A new policy will come in and it will regress to the same traditional frameworks that we're used to. This is what I also like most about the APC framework: it highlights the need to really go back to the ways that we think about security, and through that we will be able to change policy, change the frameworks, and change the institutions and structures that are already very deeply ingrained in the security sectors now, and change the attitudes of the actors as well: people in government, and even people from businesses and the private sector. So I think that's really where we need to start. Thanks so much for that, Jess. David, do you want to jump in on the question about discussions?

David Fairchild:
Oh, yeah. All right. A couple of things. This may sound a little bombastic, but it's not just women who are front and centre. Gender is not a gender-specific term: whether you're a man, a woman, or however you want to describe yourself, it's an effort that everybody has to get behind. So I'd just like to slightly correct the record: even though I'm a man, that doesn't mean I can't be highly supportive of the gender movement. That being said, on the backslide and how we can fight it: I would focus on the upstream. Let's take, for instance, the International Telecommunication Union. It's an old organisation, in fact the oldest in the UN, and it's very technical. So human rights is not something that comes up front of mind for many of these highly technical engineers and so forth. It's really about education. But of course, their demographic and the pools of interactions and stakeholders they deal with are not the same as in the human rights world or otherwise. And so there is a reaching across the hallway and a reaching out to be done, which is partly our job, but also, I think, a job for civil society. It's, as we say in French, les deux solitudes, the two solitudes: there are people who have their own demographics and their own stakeholders. Sometimes it's getting better, but it's not great. A lot of it is simply because member states have certain things that are red lines. That's normal; we have red lines when we're in negotiations, which are framed around our values and our policies in the same way. I don't have to agree with them.
And so the fight is about trying to find consensus. The UN works on consensus, which, just as a reminder, is not unanimity, but consensus tends to focus on getting everybody to agree. And so sometimes some countries or blocs will hold out on something of substance because they don't like the gender language. Sometimes it's to change it; sometimes it's to have it extracted; sometimes, because they know it's important to us, it's simply used as a weapon to extract concessions in other ways. So that's, as they say, pulling back the kimono a bit to reveal what's going on in the background. But I really want to finish, and I realise we have two minutes left, and I see the hand up. I won't name the state, but Human Rights Council Session 54 is currently ongoing. In one of the item 8 debates a few days ago, a state, which I won't name, got up and, in a statement, called for an end to the integration of SOGI (sexual orientation and gender identity) language in UN documents on the basis that it is not recognised as a legal form of discrimination under international law. Now, this state isn't perhaps the one you might think would make this statement. I'm happy to tell you offline. But just to give you an example: it's happening even in the Human Rights Council; it's happening everywhere. We have people who understand these debates in the Human Rights Council and so can defend our values, can defend the international human rights framework. But that doesn't necessarily mean that the same expertise exists at an IEEE meeting, or at the IETF, or at the ITU. So that's where civil society and, I think, the more educated stakeholders need to work with and help those who don't.

Audience:
Thanks, David, for that, and I'm aware of the time, but I want to give the audience the opportunity to jump in. And there is another question? Okay, please go ahead, and then I can try to wrap up. I can also talk about this after the session, but my name is Farzaneh Badii from Digital Medusa. We are doing research for USAID, who are looking at human-centred approaches to digital transformation, and one of their strategies is to incorporate cybersecurity into digital transformation. I was wondering if you know of any gender framework that could help development organisations working on digital transformation to consider gender as a factor when they want to put cybersecurity in place, so that it helps from the beginning instead of doing things after the technology is in place.

Veronica Ferrari:
Thanks for that question. I know we have to finish the session, but I encourage you all to continue the conversation after it ends. We can, in fact, touch base, because we have some recommendations in the framework about how to link this agenda to other agendas, for example the agenda for sustainable development and digital economy indicators. So connecting those with broader arguments could be useful, for example, in a digital transformation strategy discussion, but we can continue the conversation after the session. Grace, I want to give you the opportunity to say something before we close, if you want to. No? Okay. Thank you for being mindful of the time, and thank you all for the discussion. There were a lot of great points: the need to keep pushing for this, to produce more research and more evidence, and the importance of continuing to create awareness and rethink the concept of security, as Jess was saying. So thanks so much. Please reach out to APC if you want to stay in touch, and enjoy the rest of the IGF. Bye. Thank you so much. Thank you all for coming, and have a good day. We'll see you next time.

Audience
Speech speed: 169 words per minute
Speech length: 674 words
Speech time: 240 secs

David Fairchild
Speech speed: 208 words per minute
Speech length: 1671 words
Speech time: 482 secs

Kemly Camacho
Speech speed: 127 words per minute
Speech length: 2032 words
Speech time: 957 secs

Speaker 1
Speech speed: 174 words per minute
Speech length: 1041 words
Speech time: 359 secs

Speaker 2
Speech speed: 143 words per minute
Speech length: 892 words
Speech time: 375 secs

Veronica Ferrari
Speech speed: 190 words per minute
Speech length: 3218 words
Speech time: 1015 secs

Decolonise Digital Rights: For a Globally Inclusive Future | IGF 2023 WS #64


Full session report

Ananya Singh

The analysis features speakers discussing the exploitation of personal data without consent and drawing parallels to colonialism. They argue that personal data is often used for profit without knowledge or permission, highlighting the need for more transparency and accountability in handling personal data. The speakers believe that the terms of service on online platforms are often unclear and full of jargon, leading to misunderstandings and uninformed consent.

One of the main concerns raised is the concept of data colonialism, which is compared to historical colonial practices. The speakers argue that data colonialism aims to capture and control human life through the appropriation of data for profit. They urge individuals to question data-intensive corporate ideologies that incentivise the collection of personal data. They argue that the collection and analysis of personal data can perpetuate existing inequalities, lead to biases in algorithms, and result in unfair targeting, exclusion, and discrimination.

In response, the speakers suggest that individuals should take steps to minimise the amount of personal data they share online or with technology platforms. They emphasise the importance of thinking twice before agreeing to terms and conditions that may require sharing personal data. They also propose the idea of digital minimalism, which involves limiting one’s social media presence as a way to minimise data.

The analysis also highlights the need for digital literacy programmes to aid in decolonising the internet. Such programmes can help individuals navigate the internet more effectively and critically, enabling them to understand the implications of sharing personal data and make informed choices.

Overall, the speakers advocate for the concept of ownership by design, which includes minimisation and anonymisation of personal data. They believe that data colonialism provides an opportunity to create systems rooted in ethics. However, they caution against an entitled attitude towards data use, arguing that data use and reuse should be based on permissions rather than entitlements or rights.

Some noteworthy observations from the analysis include the focus on the negative sentiment towards the unregulated collection and use of personal data. The speakers highlight the potential harm caused by data exploitation and advocate for stronger regulation and protection of personal data. They also highlight the need for a more informed and critical approach to online platforms and the terms of service they offer.

In conclusion, the analysis underscores the importance of addressing the exploitation of personal data without consent and the potential harms of data colonialism. It calls for more transparency, accountability, and individual action in minimising data sharing. It also emphasises the need for critical digital literacy programmes and promotes the concept of ownership by design to create ethical systems.

Audience

The discussions revolved around several interconnected issues, including legal diversities, accessibility, privacy, and economic patterns. These topics were seen as not always being respected globally due to economic interests and the perpetuation of stereotypes. This highlights the need for increased awareness and efforts to address these issues on a global scale.

One of the arguments put forth was that privacy should be considered as a global right or human right. This suggests the importance of acknowledging privacy as a fundamental aspect of individual rights, regardless of geographical location or cultural context.

Another point of discussion was the need for a taxonomy that identifies specific local needs and how they relate to cultural, historical, or political characteristics. The argument advocates for better understanding and consideration of these factors to address the unique requirements of different communities and regions. This approach aims to reduce inequalities and promote inclusive development.

The distinction between local and global needs was also highlighted as crucial for effective population planning and reducing migration to the Global North. By focusing on empowering individuals to thrive in their country of origin, the discussion emphasized the importance of creating conditions that allow people to stay and contribute to their local communities.

The importance of reimagining digital literacy and skills training was emphasized as essential for empowering marginalized communities. This involves providing equitable access to digital tools and promoting inclusivity in digital participation. Bridging the digital divide was seen as necessary to ensure that everyone has the necessary tools and skills to fully participate in the digital world.

The discussions also delved into the decolonization of the Internet and the digital landscape. It was recognized that this is an ongoing journey that requires continuous reflections, open dialogue, and actionable steps. The complexities surrounding decolonization were explored in relation to factors such as economic gains and the question of who benefits from the current digital landscape.

Lastly, the need to strive for a digital space that is inclusive and empowers all individuals, regardless of their background or geographical location, was highlighted. This vision of a future in which the internet becomes a force of equality, justice, and liberation motivates efforts towards digital inclusivity and empowerment.

In conclusion, the discussions explored various critical aspects related to legal diversities, accessibility, privacy, and economic patterns. They underscored the importance of addressing these issues globally, recognizing privacy as a universal right, understanding local needs, bridging the digital divide, and advocating for a decolonized digital space. The overall emphasis was on promoting inclusivity, reducing inequalities, and fostering empowerment in the digital age.

Jonas Valente

The analysis highlights several important points from the speakers’ discussions. Firstly, it is noted that the development and deployment of artificial intelligence (AI) heavily rely on human labor, particularly from countries in the global South. Activities such as data collection, curation, annotation, and validation are essential for AI work. This dependence on human labor underscores the important role that workers from the global South play in the advancement of AI technologies.

However, the analysis also reveals that working conditions for AI labor are generally precarious. Workers in this industry often face low pay, excessive overwork, short-term contracts, unfair management practices, and a lack of collective power. The strenuous work schedules in the sector have also been found to contribute to sleep issues and mental health problems among these workers. These challenges highlight the need for improved working conditions and better protections for AI labor.

One positive development in this regard is the Fair Work Project, which aims to address labor conditions in the AI industry. The project evaluates digital labor platforms based on a set of fair work principles. Currently operational in almost 40 countries, the Fair Work Project rates platforms based on their adherence to these principles, including factors such as pay conditions, contract management, and representation. This initiative seeks to improve conditions and drive positive change within the AI labor market.

Another concern raised in the analysis is the exploitation of cheap labor within the development of AI. Companies benefit from the use of digital labor platforms that bypass labor rights and protections, such as minimum wage and freedom of association. This trend, which is becoming more common in data services and AI industries, highlights the need for a greater emphasis on upholding labor rights and ensuring fair treatment of workers, particularly in the global South.

Furthermore, the analysis underscores the importance of considering diversity and local context in digital technology production. Incorporating different cultural expressions and understanding the needs of different populations are key factors in creating inclusive and fair digital labor platforms and global platforms. By doing so, the aim is to address bias, discrimination, and national regulations to create a more equitable digital landscape.

The analysis also acknowledges the concept of decolonizing digital technologies. This process involves not only the use of digital technologies but also examining and transforming the production process itself. By incorporating the labor dimension and ensuring basic fair work standards, the goal is to create a structurally different work arrangement that avoids exploitation and supports the liberation of oppressed populations.

In conclusion, the analysis highlights the challenges and opportunities surrounding AI labor and digital technology production. While the global South plays a crucial role in AI development, working conditions for AI labor are often precarious. The Fair Work Project and initiatives aimed at improving labor conditions are prominent in the discussion, emphasizing the need for fair treatment and better protections for workers. Additionally, considerations of diversity, local context, and the decolonization of digital technologies are crucial in creating a more inclusive and equitable digital landscape.

Tevin Gitongo

During the discussion, the speakers emphasised the importance of decolonising the digital future in order to ensure that technology benefits people and promotes a rights-based democratic digital society. They highlighted the need for creating locally relevant tech solutions and standards that address the specific needs and contexts of different communities. This involves taking into consideration factors such as cultural diversity, linguistic preferences, and social inclusion.

The importance of stakeholder collaboration in the decolonisation of digital rights was also emphasised. The speakers stressed the need to involve a wide range of stakeholders, including government, tech companies, fintech companies, academia, and civil society, to ensure that all perspectives and voices are represented in the decision-making process. By including all stakeholders, the development of digital rights frameworks can be more inclusive and reflective of the diverse needs and concerns of the population.

Cultural context was identified as a crucial factor to consider in digital training programmes. The speakers argued that training programmes must be tailored to the cultural context of the learners to be effective. They highlighted the importance of working with stakeholders who have a deep understanding of the ground realities and cultural nuances to ensure that the training programmes are relevant and impactful.

The speakers also discussed the importance of accessibility and affordability in digital training. They emphasised the need to bridge the digital divide and ensure that training programmes are accessible to all, regardless of their economic background or physical abilities. Inclusion of people with disabilities was specifically noted, with the speakers advocating for the development of digital systems that cater to the needs of this population. They pointed out the assistance being provided in Kenya to develop ICT standards for people with disabilities, highlighting the importance of inclusive design and accessibility in digital training initiatives.

Privacy concerns related to personal data were identified as a universal issue affecting people from both the global north and south. The speakers highlighted the increasing awareness and concerns among Kenyans about the protection of their data, similar to concerns raised in European countries. They mentioned the active work of the office of data commissioner in Kenya in addressing these issues, emphasising the importance of safeguarding individual privacy in the digital age.

The speakers also emphasised the need for AI products and services to be mindful of both global and local contexts. They argued that AI systems should take into account the specific linguistic needs and cultural nuances of the communities in which they are used. The speakers raised concerns about the existing bias in AI systems that are designed with a focus on the global north, neglecting the unique aspects of local languages and cultures. They stressed the importance of addressing this issue to bridge the digital divide and ensure that AI is fair and effective for all.

Digital literacy was highlighted as a tool for decolonising the internet. The speakers provided examples of how digital literacy has empowered individuals, particularly women in Kenya, to use digital tools for their businesses. They highlighted the importance of finding people where they are and building on their existing skills to enable them to participate more fully in the digital world.

One of the noteworthy observations from the discussion was the need to break down complex information, such as terms and conditions, to ensure that individuals fully understand what they are agreeing to. The speakers noted that people often click on “agree” without fully understanding the terms and emphasised the importance of breaking down the information in a way that is easily understandable for everyone.

Overall, the discussion emphasised the need to decolonise the digital future by placing people at the centre of technological advancements and promoting a rights-based democratic digital society. This involves creating inclusive tech solutions, collaborating with stakeholders, considering cultural context in training programmes, ensuring accessibility and affordability, addressing privacy concerns, and bridging the digital divide through digital literacy initiatives. By adopting these approaches, it is hoped that technology can be harnessed for the benefit of all and contribute to more equitable and inclusive societies.

Shalini Joshi

The analysis highlights several important points related to artificial intelligence (AI) and technology. Firstly, it reveals that AI models have inherent biases and promote stereotypes. This can result in inequalities and gender biases in various sectors. Experiments with generative AI have shown biases towards certain countries and cultures. In one instance, high-paying jobs were represented by lighter-skinned, male figures in AI visualisations. This not only perpetuates gender and racial stereotypes but also reinforces existing inequalities in society.

Secondly, the analysis emphasises the need for transparency in AI systems and companies. Currently, companies are often secretive about the data they use to train AI systems. Lack of transparency can lead to ethical concerns, as it becomes difficult to assess whether the AI system is fair, unbiased, and accountable. Transparency is crucial to ensure that AI systems are developed and used in an ethical and responsible manner. It allows for scrutiny, accountability, and public trust in AI technologies.

Furthermore, the analysis points out that AI-based translation services often overlook hundreds of lesser-known languages. These services are usually trained with data that uses mainstream languages, which results in a neglect of languages that are not widely spoken. This oversight undermines the preservation of unique cultures, traditions, and identities associated with these lesser-known languages. It highlights the importance of ensuring that AI technologies are inclusive and consider the diverse linguistic needs of different communities.

Additionally, the analysis reveals that women, trans people, and non-binary individuals in South Asia face online disinformation that aims to marginalise them further. This disinformation uses lies and hate speech to silence or intimidate these groups. It targets both public figures and everyday individuals, perpetuating gender and social inequalities. In response to this growing issue, NIDAN, an organisation, is implementing a collaborative approach to identify, document, and counter instances of gender disinformation. This approach involves a diverse set of stakeholder groups in South Asia and utilises machine learning techniques to efficiently locate and document instances of disinformation.

The analysis also highlights the importance of involving local and marginalised communities in the development of data sets and technology creation. It emphasises that hyperlocal communities should be involved in creating data sets, as marginalised people understand the context, language, and issues more than technologists and coders. Inclusive processes that include people from different backgrounds in technology creation are necessary to ensure that technology addresses the needs and concerns of all individuals.

In conclusion, the analysis underscores the pressing need to address biases, promote transparency, preserve lesser-known languages, counter online disinformation, and include local and marginalised communities in the development of technology. These steps are crucial for creating a more equitable and inclusive digital world. By acknowledging the limitations and biases in AI systems and technology, we can work towards mitigating these issues and ensuring that technology is a force for positive change.

Pedro de Perdigão Lana

The analysis highlights several concerns about Internet regulation and its potential impact on fragmentation. It argues that governmental regulation, driven by the concept of digital colonialism, poses a significant threat to the Internet. This is because such regulations are often stimulated by distinctions that are rooted in historical power imbalances and the imposition of laws by dominant countries.

One example of this is seen in the actions of larger multinational companies, which subtly impose their home country's laws on a global scale, disregarding national laws. For instance, the Digital Millennium Copyright Act (DMCA) is mentioned as a means by which American copyright law is extended globally. This kind of imposition by multinational companies can undermine the sovereignty of individual nations and lead to a disregard for their own legal systems.

However, the analysis also recognizes the importance of intellectual property in the discussions surrounding Internet regulations. In Brazil, for instance, a provisional measure was introduced to create barriers for content moderation using copyright mechanisms. This indicates that intellectual property is a crucial topic that needs to be addressed in the context of Internet regulations and underscores the need for balance in protecting and respecting intellectual property rights.

Another important aspect highlighted is platform diversification, which refers to the adaptation of platforms to individual national legislation and cultural contexts. It is suggested that platform diversification, particularly in terms of user experience and language accessibility, may act as a tool to counter regulations that could lead to fragmentation of the Internet. By ensuring that platforms can adapt to different national legislations, tensions can be alleviated, and negative effects can be minimized.

Pedro, one of the individuals mentioned in the analysis, is portrayed as an advocate for the diversification of internet content and platforms. Pedro presents a case in which internet content-based platforms extended US copyright laws globally, enforcing an alien legal system. Thus, diversification is seen as a means to counter this threat of fragmentation and over-regulation.

The analysis also explores the concern of multinational platforms and their attitude towards the legal and cultural specificities of the countries they operate in. While it is acknowledged that these platforms do care about such specifics, the difficulty of measuring the indirect and long-term costs associated with this adaptation is raised.

Furthermore, the discrepancy in the interpretation of human rights across cultures is highlighted. Human rights, including freedom of expression, are not universally understood in the same way, leading to different perspectives on issues related to Internet regulation and governance.

The importance of privacy and its differing interpretations by country are also acknowledged. It is suggested that privacy interpretations should be considered in managing the Internet to strike a balance between ensuring privacy rights and maintaining a safe and secure digital environment.

The analysis concludes by emphasizing the need for active power sharing and decolonization of the digital space. It underscores that preserving the Internet as a global network and a force for good is crucial. The failure of platforms to diversify and respect national legislation and cultural contexts is seen as a factor that may lead to regional favoritism and even the potential fragmentation of the Internet.

In summary, the analysis highlights the concerns about Internet regulation, including the threats posed by governmental regulation and the subtle imposition of home country laws by multinational companies. It emphasizes the importance of intellectual property in the discussions surrounding Internet regulations, as well as the potential benefits of platform diversification. The analysis also highlights the need for active power sharing, the differing interpretations of human rights, and considerations for privacy. Overall, preserving the Internet as a global network and ensuring its diverse and inclusive nature are key priorities.

Moderator

The analysis delves into the various aspects of the impact that AI development has on human labour. It highlights the heavy reliance of AI development on human labour, with thousands of workers involved in activities such as collection, curation, annotation, and validation. However, the analysis points out that human labour in AI development often faces precarious conditions, with insufficient arrangements regarding pay, management, and collectivisation. Workers frequently encounter issues like low pay, excessive overwork, job strain, health problems, short-term contracts, precarity, unfair management, and discrimination based on gender, race, ethnicity, and geography. This paints a negative picture of the working conditions in AI prediction networks, emphasising the need for improvements.

The distribution of work for AI development is another area of concern, as it primarily takes place in the Global South. This not only exacerbates existing inequalities but also reflects the legacies of colonialism. Large companies in the Global North hire and develop AI technologies using a workforce predominantly from the Global South. This unbalanced distribution further contributes to disparities in economic opportunities and development.

The analysis also highlights the influence of digital sovereignty and intellectual property on internet regulation. It argues that governments often regulate the internet under the pretext of digital sovereignty, while in practice the legal systems of larger nations are extended to every corner of the globe. This practice is described through the concept of digital colonialism, whereby multinational companies subtly impose alien legislation that does not adhere to national standards. Intellectual property law, such as the DMCA, is cited as an example of this behaviour. To counter this, the analysis suggests that diversification of internet content and platforms can be an essential tool, safeguarding against regulations that may result in fragmentation.

Furthermore, the analysis emphasises the need for documentation and policy action against gender disinformation in South Asia. Women, trans individuals, and non-binary people are regularly targeted in the region, with disinformation campaigns aimed at silencing marginalised voices. Gender disinformation often focuses on women in politics and the public domain, taking the form of hate speech, misleading information, or character attacks. The mention of Meedan’s development of a dataset focused on gender disinformation indicates a concrete step towards understanding and addressing this issue.

Digital literacy and skills training are highlighted as important factors in bridging the digital divide and empowering marginalised communities. The analysis emphasises the importance of democratising access to digital education and ensuring that training is relevant and contextualised. This includes providing practical knowledge and involving the user community in the development process. Additionally, the analysis calls for inclusive digital training that takes into consideration the needs of persons with disabilities and respects economic differences.

The analysis also explores the broader topic of decolonising the internet and the role of technology in societal development. It suggests that the decolonisation of digital technologies should involve not only the use of these technologies but also the production process. There is an emphasis on the inclusion of diverse perspectives in technology creation and data analysis to avoid biases and discrimination. The analysis also advocates for the adaptation of platform policies to respect cultural differences and acknowledge other human rights, rather than solely adhering to external legislation.

In conclusion, the analysis provides a comprehensive assessment of the impact of AI development on human labour, highlighting the precarious conditions faced by workers and the unequal distribution of work. It calls for improvements in labour conditions and respect for workers’ rights. The analysis also raises awareness of the need to document and tackle gender disinformation, emphasises the importance of digital literacy and skills training for marginalised communities, and supports the decolonisation of the internet and technology development. These insights shed light on the challenges and opportunities in ensuring a more equitable and inclusive digital landscape.

Session transcript

Moderator:
Hi, good morning, good afternoon, and good evening to all those who are joining us on site or online. Welcome to our workshop. This workshop is called Decolonize Digital Rights for a Globally Inclusive Future. Before we begin, I would like to encourage both on-site and remote participants to scan the QR code that is just on the screen here; the link is also being shared in the Zoom chat right now, so you can express your expectations for the session. As a reminder, I would also like to request that all the speakers, and audience members who ask questions during the question and answer round, please speak clearly and at a reasonable pace. I would also like to request that everyone participating maintain a respectful and inclusive environment, in the room and in the chat. For those who wish to ask questions during the question and answer sessions, please raise your hands. Once I call upon you, if you are on site, please take the microphone at the left or the right side, clearly state your name and the country you come from, and then go ahead and ask your question. Additionally, please make sure that all mics and other audio devices are muted to avoid disruptions. If you have any questions or comments, or would like the moderator to read out your questions or comments online, please type them in the Zoom chat. When posting, please start and end your sentence with a question mark to indicate whether it is a question or a comment. Thank you. We may now begin our session. So, thank you for joining the session, whether you are online or on site. It is going to be a very thought-provoking session that delves into the decolonisation of the Internet. I am Mariam Job, and I will be your on-site moderator for today’s session. Online, we have Nelly, who is going to be the online moderator, and we have Keolu Bojil, who is going to be the reporter for today’s session.
Keolu Bojil, my sincere apologies if I mispronounce the name, is going to be the reporter for the session. Today, we are gathered here to confront the very uncomfortable truth that the Internet is a space where everyone is not always equal, and it is very far from being a place where everyone is equal. Despite the strong idea that all groups are equal online, historical bias and power imbalances persist. Marginalised groups continue to face barriers in the creation and design of technology. This often results in digital colonialism, where the dominance of privileged groups in shaping technology design leads to discrimination against users in the global south, perpetuating linguistic bias and slower removal of non-English content, regardless of the magnitude of hate or harm. The unequal response to these problems further highlights the disparity. While platforms have introduced features such as safety checks, one-click reporting options, and fact-checking measures for major elections in the West, misuse of data, misinformation, and disinformation continue to plague the global south. The under-representation of authors of colour on online knowledge platforms paints a stark picture of the inequalities that persist. Even voice assistants designed to assist and interact with users have been found to reinforce gender biases, normalize sexual harassment, and perpetuate conversational behavior patterns imposed on women and girls. This not only limits their autonomy, but also puts them at the forefront of errors and biases.
Hate speech targeting marginalized communities continues to rage online, creating a very unsafe environment for those from the global south and from marginalized communities. Users in the global south have the same right to feel safe and to have the same autonomy as users in the global north. In this workshop today, we are going to delve into the concept of decolonization in relation to the internet, technology, and human rights and freedoms online. Our esteemed panelists, two on-site and several joining online, will unpack the evidence that exists of gender stereotypes, linguistic bias, and racial injustice that are coded into technology. They will shed light on how apps are often built based on creators’ opinions of what the average user should or should not prefer. Furthermore, they will also offer recommendations on how online knowledge can be decentralized and how ideological influences can be delinked from the digital arena. They will propose practices and processes that can help decolonize the internet and transform it into a truly global, interoperable space. Throughout the session, we are going to address three policy questions. One, what are the colonial manifestations of technologies such as language, gender, media, and artificial intelligence in the digital domain, and how do we address internet discrimination against people of color? Two, how do we address the colonial legacies that shape the Internet and continue to determine its future? Three, how do we address the intersectionality that exists in our technology and the digital arena as a whole, and how can we better include marginalized communities in these discussions?
We hope that by attending the session, participants will gain a deeper understanding of the context of decolonization in relation to the Internet, will learn to recognize the ways in which bias is built into our technology, and will understand how data drawn from actors’ beliefs and systems can perpetuate stereotypes and historical prejudice in the code itself. During the session, we hope and aim to have a conversation on how we can decolonize technology and the digital space and pave the way for a more inclusive digital future. Today we are going to be hearing from a number of speakers. Let me introduce them. We have Shalini, who is … and is involved in addressing these issues. She is also the co-founder of Khabar, India’s only independent digital rural news network. Our next speaker is Ananya, who is here with us in person. She is the youth advisor of USAID’s Digital Youth Council. Over recent years, Ananya has been very active in the Global Digital Development Forum and has also been a Next Generation ICANN ambassador at ICANN 64, ICANN 68, and ICANN 76. Ananya holds a master’s degree in development and labor studies from Jawaharlal Nehru University, New Delhi. This is Ananya. We have Pedro, who is joining us online as well. He is an innovation lawyer at Systema Industry and a PhD student at UFPR, with an LLM from the University of Coimbra. He is a board member of IODA, ISOC Brazil, and Creative Commons Brazil, and an organizer of the Youth LACIGF. And we have Tevin, who is here with us in person. He is from GIZ and is a tech and policy lawyer based in Nairobi, Kenya.
He heads the data governance division at the DTC, GIZ Kenya, and previously he worked as a data protection advisor at GIZ. He also serves as the secretary of the Kenya Privacy Professionals Association. And with that, we begin the session today. We will start with Jonas, who is joining us online to give a brief presentation. Yes, Jonas, are you with us?

Jonas Valente:
Yes, yes, good afternoon. Would it be possible for me to share my screen?

Moderator:
Yes, please, let me see that. Can you see it?

Jonas Valente:
Yes, yes, we can see your screen now. Okay, thank you so much. Good afternoon to you. Good morning for me in London right now. And good morning even more for Pedro in Brazil. So it is an honor for us from the Fair Work Project to join this panel. I am going to talk about the labor conditions in AI global production networks. And this is super important because normally, in the digital rights community, we look at the effects of technologies like AI, but we need to look also at the workers who are producing them. So the first assumption is that AI development and deployment is super dependent on human labor. And unfortunately, this human labor is characterized by a set of features that make it very precarious, with very insufficient arrangements regarding conditions like pay, management, and collectivization. When we talk about data work, we talk about activities like collection, curation, annotation, and validation, and throughout this chain, you have human labor. So when we talk about artificial intelligence, it is important to know that it is not so artificial. We need thousands of workers, and those thousands of workers are distributed all around the world. But this distribution is not random or neutral. This distribution expresses the legacies of colonialism. We have big companies in the global North who are hiring and developing these technologies, and a workforce mainly in the global South. We can see here how the main countries are India, Bangladesh, and Pakistan. We also have a workforce in the United States or the United Kingdom, but mainly global South countries are taking part in this, through business process outsourcers or digital labor platforms. The Fair Work Project assesses digital labor platforms against a set of principles, and we try to address the risks of platform work and the platform economy. And which risks are those? First of all, a risk of low pay. Our Cloud Work Report launched this year showed how
micro workers earned around two dollars an hour, and other reports and studies show the same. Of course, when you are talking about some countries, considering the currency, this may not seem so bad, but what the studies are showing is that those payment structures and payment amounts are super insufficient to ensure adequate and meaningful livelihoods. Another problem is excessive overwork and job strain, which leads to health issues. We have workers working 15 or 16 hours. Normally workers need to switch day for night, because they need to be awake during global North time instead of their own country’s time, and this leads to exhaustion, to problems related to sleep, and to various other mental health issues that we have been finding in our studies. Workers also suffer from short-term contracts and precarity. Normally, if you have a business process outsourcer, you have a one-month or two-month contract, and on cloud work platforms you do not have a contract in the traditional sense, and these workers need to search for tasks all the time. Our 2022 report showed that those workers worked eight hours on unpaid tasks, and once again, this is a legacy that we see of colonial and capitalist regimes and work arrangements. Those workers suffer from unfair management and especially from discrimination, and you can see this discrimination based on gender, based on race and ethnicity, and based on geography, and we can see the legacies of colonialism. Data workers also face bullying and are subject to extreme surveillance. And finally, another risk is the lack of collective power, and of course, this turns into more asymmetries between workers and platforms. The Fair Work Project is working all over the world, in almost 40 countries.
It is coordinated by the Oxford Internet Institute and the WZB Berlin Social Science Center, and funded mainly by GIZ, connected to the German government. We are assessing location-based platforms, cloud work platforms, and AI. And we have these five principles: pay, conditions, contracts, management, and representation. We collect data from different sources, and we rank platforms. And to finish, our AI project is looking at-

Moderator:
Jonas, please help us round up.

Jonas Valente:
Yeah, I’m rounding up, but this is the last slide. We are assessing specific AI companies, and when we try to do that, we try to show that the platform economy can be different, and to be different is part of the decolonizing process of AI technologies. Thank you so much.

Moderator:
Thank you so much, Jonas. That was quite insightful, with data to back up the fact that these are concerning issues when it comes to the decolonization of the internet. We are going to take another five-minute presentation from another of our online speakers, Shalini. Please go ahead and share your presentation.

Shalini Joshi:
Thank you. I do not have a presentation, but I made some points for the discussion today. Thank you very much to the IGF and to the organizers of this workshop. It is a real honor to be here. I am going to talk about the problems with AI in terms of gender and in terms of language, and I am also going to talk about the work that Meedan, the organization that I work with, has been doing to address some of these issues. As we all know, there have been experiments carried out with generative AI on how different image generators visualize people from different countries and cultures. When we look at these images, they almost always promote biases and stereotypes related to those countries and cultures. When text-to-image models were prompted to create representations of workers for high-paying and low-paying jobs, the high-paying jobs were dominated by subjects with lighter skin tones and were mostly male. The images that we see do not represent the complexity, heterogeneity, and diversity of many cultures and peoples. We also know that AI models have inherent biases that are representative of the data sets they are trained on. Image generators are being used for several applications in many industries, even in tools designed to make forensic sketches of crime suspects, and this can cause real harm. A lot of the models that are used tend to assume a Western context, and AI systems look for patterns in the data on which they are trained, often favoring the trends that are more dominant. They are also designed to mimic what has come before, not to create diversity. So we are talking about inclusivity in technology: how do we ensure that AI technology is fair and representative, especially as more and more of us start using AI for our work? Any technical solutions to such bias would likely have to start with the training data that is being used.
Seeking transparency from AI systems and from the companies involved is also really important, because very often these companies are very secretive about the data they use to train their systems. There is also the issue of language. Often, AI models are trained with data that uses mainstream languages; often, these are the languages of the colonizers. Many AI-based translation services use only major languages, overlooking hundreds of lesser-known languages. And some of these are not even lesser-known languages: languages such as Hindi, Bengali, and Swahili, which are spoken by many people, also need more resources to develop AI solutions. From a sociocultural standpoint, preserving these languages is vital, since they hold unique traditions, knowledge, and an entire culture’s identity, and protecting their richness preserves language diversity. So in this context, what is it that we are doing at Meedan, the organization that I work with? We are a technology nonprofit. Over the last 10 years, as the internet has evolved and changed, Meedan has maintained a unique position as a trusted partner and collaborator, working both with civil society organizations and with technology companies that harness the affordances of digital technology to communicate. Our approach has been consistent: we build collaborations, we build networks, and we build digital tools that make it easier for hyperlocal community perspectives to be integrated into how global information challenges are met. We understand that our ability to work across community, technology, and policy stakeholders is a privilege, and this is our unique contribution. We see ourselves as facilitators and enablers of change. And we do this by developing open-source software that incorporates state-of-the-art ML and AI technologies, and by building coalitions.
A lot of these coalitions are built around large events, such as elections, that enable skills sharing and capacity building. And this multi-pronged approach strengthens collaboration and the ability for hyperlocal community perspectives and participation in addressing.

Moderator:
Thank you so much for that, Shalini. That was quite insightful, to learn about the work that you do and about the stereotypes and biases that have been coded into our technology and our internet for as long as we have been using them. If we do not tackle them, if we do not talk about them, if we do not even realize that these stereotypes and gender biases are coded into the internet and the digital technologies we use, we have a long way to go when it comes to decolonizing the internet. We are going to take another five-minute presentation or speech from another of our speakers, this one on site. But before we do that, I would like to share some of the comments that were made about the expectations for the session. We see that people are expecting reflections, candid direction, articulation, and radical, honest manifestations. Of course, the link is still in the Zoom chat, so if you would like to share your expectations, you may still go ahead and comment. Ananya, you may go ahead, please.

Ananya Singh:
Thank you so much. First of all, let me begin by saying that I am very happy to be here in Japan. And no, it is not just because Japan is such a beautiful country and the people here are so nice. I mean, of course they are. But also because I can finally live a day where I do not get spammed by calls from a range of companies trying to sell me their products, a bunch of coaching centers trying to send me to their engineering institutions with the aid of their tutors. By the way, I have a master’s degree in development studies, so engineering was clearly never my choice. Or random call center agents forcing me to invest in certain deals, or just another automated customer support call trying to divert my attention from my work. The one question that always comes to my mind when my phone rings and the Truecaller app detects it as a spam call is: how did they get my number? Who gave them my number? And why did they give it to them? Why was I not asked? Given that it is my number, and my number is connected to, very obviously, a ton of different data related to me, and since I own both the number and any data related to that number, I should have been asked. But I was not. And I am sure we are all very familiar with those lottery emails. Come on, we have a dedicated spam folder where all those great deals, gone-in-a-day bumper offers, and their like keep lurking. So how did they choose you or me? I have never been that lucky in my entire life, by the way. So who gave them our email address? And if they found our email addresses, are they going to be very far from our residential addresses or our bank account numbers? The way we live our lives has become excessively dependent on virtual and online activities, even more so after the pandemic. For instance, social media,
Our details set to public or private are available for usage by online companies. The principal actors here capture our everyday social acts, translate them into quantifiable data, which is analyzed and used for the generation of profit. In the book, The Costs of Connection, the authors Nick Coldray and Ulysses Meijas also reiterate this view by emphasizing that instead of natural resources and labor, what is now being appropriated is human life through its conversion into data, meaning our online identities have become a commodity which can be exploited and used for capital gains, controlling our time and usage and influencing important decisions or processes in our lives. Hence the term data colonialism. But I know some people do contest the usage of the term data colonialism because historically, colonialism is unthinkable without violence, the takeover of lands and populations by sheer physical force. That’s true. But let’s take the example of the Spanish empire’s requerimiento or the demand document. It was made to inform the natives of the colonists’ right to conquest. Confiscators read this document out, demanding the natives’ acceptance of their new conditions in Spanish, which no local understood. Now think of the terms of service we sign up to every time we join a platform. They’re often unclear, long, full of jargons, which we rarely have the time to read, and so automatically, almost like a reflex, we click on, I agree. But do we really agree? Unknowingly, we are giving consent to being tracked online, being called at odd hours to be sold insurance policies for the children. By the way, I don’t have it. And hence our ignorance, our implied or uninformed consent for these kinds of data collection provides a very valuable yet free raw material. data. Once a senior official from a very famous company stated that data is more like the sunlight than oil, meaning a resource that can be harvested sustainably for the benefit of humanity. 
But this very idea makes my personal data a non-excludable natural resource available for public use. But does it not contradict the very word personal in personal data? Okay, I’ll leave you with that..

Moderator:
Thank you. She is the only person who has kept to time since this session started. Thank you very much for that, Ananya. We are now going to take a five-minute presentation as well, from Pedro, who is joining us online. Pedro, are you online? Yes, yes, we can hear you.

Pedro de Perdigão Lana:
That is great. So good afternoon, everyone. I hope you are all well. I am greeting you from a 4 a.m. pre-holiday morning here in Brazil. But to get to the presentation, what I want to comment on with you today, just let me put a timer here, there we go, is the results of a research project funded by a Latin American program focused on youth, named Lideres 2.0. It is an amazing program with many interesting and diverse phases, and I recommend you all seek more information about it, maybe as a way to repeat the idea in your regions. And for the sake of time, back to the real content of the presentation. The idea here is simple: linking sovereignty, fragmentation, regulation as a reaction, and the theme that I try to force into everything that I research, intellectual property. Governmental regulation is probably one of the most important threats we have to the internet when we are talking specifically about the dangers of fragmentation, but it is important to see what is behind these regulatory proposals, or to be more precise, what serves as justification for these movements. The argument that I will try to put forward here is that even when this is not the real reason that motivates public authorities, especially when I am talking about authoritarian ones, hard regulation based on digital sovereignty arguments is frequently stimulated by distortions that originate in what we call digital colonialism, be it from multinational tech companies or from countries that have much more steering power in shaping the internet than others, even if that power is not exercised in such a direct and explicit manner. We can see this when those larger multinational companies end up extending the legal systems of their home countries to every corner of the globe, subtly imposing alien legislation even when it does not follow the standards of the national laws that actually apply. This is where intellectual property comes in.
The Digital Millennium Copyright Act, or DMCA, a result of the copyright reform for the information society in the USA, establishes systems of notification and counter-notification and other mechanisms that are severely favorable to the copyright holder, and the largest content-based platforms seem to have repeated those systems all over the planet, sometimes, of course, with great support from the international lobby of the American entertainment industry. Similarly, when I go to a Brazilian page, for example, that responds to allegations of copyright infringement on these content-based platforms, I will almost always see explanations of how fair use works, which is an institute that simply does not exist in the Brazilian legal system, since this is a country that adopts a system of limitations and exceptions for permitted uses of copyrighted works. Of course, this example may seem strange to some: how many people actually care about intellectual property when compared to discussions such as disinformation or freedom of expression? But all these areas are umbilically linked. In Brazil, for example, we even have a provisional measure, which is something like an executive bill, that intended to create obstacles for content moderation through copyright mechanisms. The most important point here is that this exemplifies a much broader behavior that attracts a lot of negative reactions and may be instrumentalized by ill-intentioned actors. If a multinational platform does not even care about conveying an image that follows something as central to the idea of sovereignty as national legislation, you can only imagine what fuel this provides for movements that want to portray the transnational interactions that are made possible through the internet as something dangerous or something that needs to be controlled.
Summing this up: internet content and platform diversification (we are talking about user experience, language accessibility, et cetera) is not the same as fragmentation. Not only is it not the same, but this diversification, where platforms actually adapt to certain cultural contexts, may be an important tool against pushes for regulation that could result in fragmentation. So back to you.

Moderator:
Thank you, thank you so much for that, Pedro. That was quite insightful. And now we will take our last opening remark, from Tevin from GIZ Kenya. Yes, it is working now.

Tevin Gitongo:
Okay, good afternoon, everyone. My name is Tevin Mwenda Gitongo. We have had quite a number of presentations, and mine is going to take a different tangent: mine is going to show you how we are trying to decolonize the digital future. We have heard all the things that are happening, and sometimes it sounds scary, so ours is more of: let us try and actually solve it. Let us put our money where our mouth is. I am going to make a short presentation of the project that we are working on at GIZ. As you have heard, I work for GIZ Kenya under the Digital Transformation Center, which is a project supported by the German government and Team Europe, working together with the Kenyan government, specifically the Ministry of ICT. And in our own little way, I cannot say we are perfect, but we are trying to see how we can do this in different aspects. One thing we must recognize is that, although we have had a lot of presentations on AI, decolonizing the digital rights future is not just about AI. It has to be every other facet as well that builds up to AI. And that is what we are trying to do in our own small way. So for the project, as you can see, the objective is to support Kenya’s digital transition towards a sustainable and human-centered digital economy. There are three visions and missions, but I am going to look at the two major ones that affect this panel. The first one is that we recognize that we must make technology work for people. Throughout the presentations you have heard, that is maybe where we are really going wrong, particularly in developing countries: the technology being made is at some point not working as it was ideally intended. The other one is to enable a rights-based and democratic digital society. So we really have to be aware of that. And the approach we decided to take with this, I can say, interesting experiment is, on one hand, to leapfrog Kenya’s digital economy.
The first thing we decided we are going to do is work together, so I am going to give you a brief overview of what we are doing. The first thing we are doing is working with the local digital innovation ecosystem to build capacities on data protection and IT security, to foster a data-driven economy, and to work towards decent job creation in the gig economy. All of this builds up together to enable that. The other thing that we have done is to build Kenya’s digital society, and this includes exploring emerging tech like AI. We are digitalizing public services, but in a user-centric way so that we do not leave anyone behind, and we are building capacities on data protection. We also focus on bridging the digital divides, and we do this by ensuring no one is left behind: the youth, women, rural and urban populations, and also persons with disabilities. As for the approach we took, what you see on the side are all our stakeholders. We are not just working with technology companies; we are trying to practice the multistakeholder approach of an IGF in the everyday work that we do, because at any one moment, as in my own work, I deal with all those stakeholders. We recognize that one of the best ways to actually achieve a future of decolonized digital rights is to leave no one behind. So we have government on our teams, we have the private sector, we have civil society, and we have academia. We have the big tech companies, and we have the local tech companies. We have a team of about sixty to seventy people, and of the studies we have done, there are quite a number, but I will mention the major ones that are relevant to this. The first one was a study on a human-centered cybersecurity approach.
So, if you know Kenya, we are known as a fintech powerhouse in terms of the work that is done there, but beyond that, we've got a couple of other things that we've done. One of them is data protection and privacy from a gender perspective, and I think that's important, because we always forget that the most vulnerable groups, particularly when it comes to data protection, in most cases are women. So we decided to look at data protection and privacy from a gender perspective and how to enable participation online. The next thing that we did, and I'm going to jump to it, is strengthening gig workers. Every year we publish a report where we rank digital labor platforms under the ILO Fair Work principles and how they are performing. And the other one, when it comes to AI and leaving no one behind, maybe the one that I'm always excited about, is building local solutions. One of the things that we did, for example, working with Kenyan entrepreneurs and Kenyan coders, was creating chatbots, like the versions that you see from OpenAI, but these ones are locally created. They're able to speak English, Swahili, or a mixture of English and Swahili. In that way, some of the products that are created are geared towards those persons, and they're able to help. And also, in relation to PWDs, we developed the first-ever continent-wide ICT accessibility standards. So those are just some of the few ways that we are trying to, I can say, decolonize digital rights. I was just showing an overview of it all. Thank you very much.

Moderator:
Thank you. Thank you very much for that, Tevin. I think our collective efforts are always very much needed on these kinds of issues. Our panelists have shed light on the concept of decolonization in relation to the Internet, technology, human rights, and online freedoms. I think it's time that we engage in discussions that go deeper into these concepts and explore the synergies and trade-offs that are involved. Our objective really is to understand how we can harness these innovations and address these issues to responsibly create something more sustainable and equitable for a globally inclusive digital future. We would start with Jonas, who is online. Jonas, what are some of the ways in which cheap labor from the global south powers contemporary digital products and services?

Jonas Valente:
Cheap labor is key for all AI development, and this is why lots of companies are using digital labor platforms: these platforms circumvent social protections and basic labor rights, and sometimes we're talking about 19th-century rights like minimum wage or freedom of association. Using that, those companies can benefit from this cheap labor, and those workers unfortunately are not being properly compensated, do not have health and safety protection measures, and don't have the rights that were won from the 19th century to the 20th century. Unfortunately, this is becoming the rule in the data services global value chains, including AI, and that's why we need to address this issue and talk about how to ensure those labor rights for workers all around the world, focusing specifically on what's happening in the global south.

Moderator:
Thank you, Jonas. I have a question for Shalini, but before I go on to that, I have a follow-up question for you, Jonas. Why are these conditions so bad, and how is the Fair Work project working to improve them? Jonas, you have the floor.

Jonas Valente:
Currently, so far, the regulatory efforts, they are only addressing on-location platforms.

Moderator:
Okay, we're talking about the Internet Governance Forum and, you know, we're having internet issues online. Since there's an internet blockage over there, we're going to go ahead and move to Shalini. Shalini, you mentioned some of the work you do at MENA during your opening remarks. What forms do online hate and falsehoods take in the APAC region?

Shalini Joshi:
Thanks. I'm going to focus on the issue of gender in the Asia-Pacific region, and specifically on South Asia. Women, trans people, and non-binary people in South Asia are regularly targeted with online disinformation, and this disinformation is propagated in an attempt to silence already marginalized individuals and make it difficult for them to safely participate in public discourse. Much of the work on gender disinformation covers women in politics and those in the public domain. Research also shows that the narrow definitions of gender disinformation and the current focus on women public figures sometimes sideline affected girls, women, and gender minorities who do not have a public presence. Gender disinformation, as we know, can take many forms. That includes hate speech, intentionally misleading information and rumors, attacks on the character and affiliations of people, and attacks on the private and public lives of people, which impacts people in such a way that they are either self-censoring, removing their social media content, or living in hiding. There are direct and indirect threats to their lives, and it also generally enforces stereotypes of vulnerability. So what we're trying to do at NIDAN is develop a data set on instances of gender disinformation to build more evidence for supporting research and policy action. We have brought together a diverse set of stakeholder groups in South Asia to work collaboratively to define gender disinformation from a South Asian perspective, and to identify, document, and annotate a high-quality data set of gender disinformation and hate in online spaces for better understanding and countering the issue. We're going to use machine learning techniques in the process. And as we document more instances of gender disinformation online, we feel that the technology that we use will also become better at locating additional content, thereby creating a virtuous cycle.

Moderator:
Thank you, Shalini. When you started answering the question, I was going to ask a follow-up about some of the best practices and measures that you have put in place to counter online hate that targets marginalized communities, in your context, women. But you answered that when you were talking about the data set that you are developing. So, thank you for that. Ananya, in your opening remarks you talked a lot about data and how it really is the key, the oil. So, what are some of the implications of data colonialism and surveillance capitalism for digital rights? How can individuals and communities reclaim control over their personal data, which they sometimes are not even aware they are giving out? And how do they protect their privacy in the digital realm?

Ananya Singh:
Yes, apparently it's no longer oil, but sunlight. Well, historically, the era of colonialism was ushered in by boats that came to the new world to expand empires through infrastructure building and precious metals extraction. Now, like every other thing, colonialism is also going digital. It establishes extensive communication networks, like social media, and harvests the data of millions to influence things as simple as advertising and as critical as elections. Data colonialism justifies what it does as an advance in scientific knowledge, personalized marketing, or rational management, just as historical colonialism claimed itself to be a civilizing mission. But yes, some things have changed. If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism's power grab clusters around the capture and control of human life itself, through appropriating the data that can be extracted from it for profit. Data colonialism is global, playing out in both the global north and the global south, dominated by powerful forces in both the east and the west. Unfortunately, regardless of who directs these practices or where they take place, they often lead to the erosion of privacy rights, as individuals' personal data is collected, analyzed, and used without their knowledge or explicit, informed consent. And as you saw in the example I gave you about the spam calls I get, there is little to no redress mechanism. I mean, yes, I can block and report, but can I live happily ever after? No. Because there will be yet another company which has employed another spammer waiting to call me again to sell their policies. My data, your data, are now in the hands of so many people that it is going to be extremely difficult for us to individually trace and then erase our data. Hence, this will ultimately result in a loss of autonomy and control over our own personal information. 
While our data may be widely dispersed, the power to capture and control our data continues to remain concentrated in the hands of a few. This can lead to a lack of transparency, accountability, and democratic control over data practices, potentially undermining individuals' rights and freedoms. The collection and analysis of personal data can perpetuate existing inequalities, as some of my able panelists have already mentioned. Training emerging technology on biased data can lead to biases in algorithms, unfair targeting, exclusion, discrimination, and the list goes on. These practices can also be used to manipulate and influence individuals' behaviors, opinions, and choices, threatening individuals and democracies. We have seen that happening already. Undeniably, ideologies such as connection, building communities, and personalization will keep incentivizing corporations to collect more of our personal data. Hence, the only way to prevent data colonialism from further expanding is to question these very ideologies. Individually, we must prioritize data minimization: be mindful of the information we share online and limit the amount of personal data we share with technology platforms. I personally do this by limiting my social media presence, which, by the way, is very good for your mental health as well. I like to call this digital minimalism. Further, think twice before you agree to the terms and conditions. While it is easy to be fatigued by the almost incomprehensibly long document written in complicated language, take time to think before giving in to the impulse of clicking on I agree. I'll stop with that because I don't want to take more time than I have been allocated. Thank you.

Moderator:
Thank you for that, Ananya. That's quite insightful. However, I do have a comment to make. Honestly, we want people to be able to feel at ease, comfortable, and safe on the internet, not to have to restrict themselves from using the internet or social media. So I think this is something we will have to talk about again, maybe in another session or towards the end of this one: how to make sure that data is utilized properly, with purpose, not just for spam calls like you experienced. I will move to Pedro, who is online. Pedro, my question to you is: do multinational platforms care about the legal and cultural particularities of the countries in which they operate?

Pedro de Perdigão Lana:
Yeah, I will try to shorten my presentation as well so we can give the floor back to Jonas at the end of this section to finish his. I do think they care, especially because generating conflicts with the geocultural and legal particularities of the markets in which you are trying to sell your services usually means lower profits, or at least higher costs. But these concerns only go as far as the immediate costs of this adaptation can be considered not too high, and this is a problem when you consider the difficulty of measuring the indirect and long-term costs that platforms would certainly suffer in a fragmentation scenario. For example, the platforms investigated in the research project translated their main pages about intellectual property policies, but when you browse for more details, you'll notice that not even something as simple as the translation of some pages was normally done, or the interlinks led us to English versions. One of them, which was not content-based, had only the most basic page translated.

Moderator:
And how is this reflected in the global human rights system, which, as a rule, still defers to the sovereignty of national legal systems in determining jurisdictional conflicts?

Pedro de Perdigão Lana:
Well, I think that this reflects directly on human rights. Intellectual property is itself globally considered a human right, but what I mean here is that, although we have some international frameworks, human rights are not interpreted the same way worldwide. Freedom of expression is a good example: some cultures see it as a much broader right than others. Copyright itself may be stronger or weaker when confronted with other fundamental rights, such as education or access to culture. So if platforms need to frame their policies around such concepts, they should at least do it in a way that is not so clearly unbalanced towards a single perspective. Simply telling the user to follow external legislation as guidance is, quite frankly, a bit offensive, since it really wouldn't cost that much to get someone to do a quick review of the legal policies and deliver some adaptation, even if superficial. The problem here is the image that those platforms simply do not care about some basic elements of the societies they have as markets for their services and products, especially when we see that they can evidently adapt, as one can observe with the changes made because of the German legislation called NetzDG, especially on social media.

Moderator:
All right, thank you for that. I will move on to Tevin here. Tevin, during his opening, talked a lot about what GIZ is doing with regard to working with communities, especially marginalized communities. And I want to ask you: how can digital literacy and digital skills training be reimagined in a way that empowers marginalized communities and bridges the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital realm?

Tevin Gitongo:
Thank you for the question. I'll pick up from the question you asked earlier on whether large entities care about legal and cultural considerations. I think the lesson I've learned is that you have to care about the cultural considerations for you to have any impactful trainings or digital skills. You have to bring yourself to the level of, and be there with, the partner you want to deliver the training to. Thinking of it practically: how do you do that? How do you actually demonstrate that you are aware of the person's context and how you can help bring them up to where you want them to be in terms of lessening the digital divide? I normally look at it as a four-step process. The first one is the stakeholders that you work with. Because more often than not, you're guilty of working with stakeholders who have no clue what's happening on the ground. You go there, you tell them you're going to do this, then they tell you, yes, we've done a training. Then you go on the ground and you realize, oh, this was the wrong stakeholder; clearly, I did not understand what was happening in this context. And that way, the training doesn't have impact. The next thing I look to is accessibility. And I look at that in relation to democratizing knowledge. By this I mean: when you do a training, it should be one in which you are actually transferring knowledge, not just ticking a box. I've seen there's a huge difference there, because in most cases we are ticking boxes but not actually transferring knowledge that helps them grow. 
One of the things we've done with that, and I see my colleague is also here, is when we were developing the AI chatbots. Because it is a skill we were trying to transfer, we brought Kenyan developers into the room, and we brought other developers, I think from Europe, who have expertise in developing such models. And we said, we want you to teach each other, not just one side teaching the other, how to develop this, because the Kenyans come with indigenous knowledge of how Swahili works, to develop an NLP model in Swahili or a mixture of Swahili and English, and the others come with the knowledge of how to develop these systems. What happened is that after they built the first system, for the next system we're developing, another one for Kenya's Data Protection Commissioner, it's the Kenyans who are running the show now. It's them who are developing everything. So you start seeing that you're slowly reducing that gap. The next thing is affordability, of course. If you really want to create any impact, you have to create training that people can afford, and that also goes back to accessibility. And lastly, inclusion of everyone. This can also be done practically, and one of the things I mentioned we assisted in developing is the ICT accessibility standards for persons with disabilities for Kenya. So whenever you're designing a system, how do you design it for persons with disabilities so you don't leave them behind, given that Kenya is digitizing a lot but we are forgetting that whole area? I think that's it. Thank you.

Moderator:
Thank you so much for that, Tevin. With everything that all the panelists have said, it always goes back to bridging the digital divide: digital skills, making sure that people are aware of these things, that they know how to protect themselves, that they know how to use the technology, and that they know what the issues are and how to tackle them. In any matter of Internet governance, that is very important. We're going to go back to Jonas, who had connection issues, but I think we have some time to spare. He's back now, and he was telling us about the ways in which cheap labour from the global south powers contemporary digital products and services. Jonas, can you please tell us why these conditions are so bad, and how the Fair Work project is working on improving them?

Jonas Valente:
These conditions are bad because platforms found a way, I think my connection will freeze again, I hope it doesn't, because platforms found a way of circumventing labour and social protections, and by doing that, companies can hire cheap labour. That's why we're seeing low pay, health and safety issues, and management problems all around the world. A study has estimated 163 million online workers, so this is a very representative number of people. The Fair Work project has assessed platforms all around the world, in 38 countries, and we analysed and scored those platforms according to five principles: fair pay, fair conditions, fair contracts, fair management, and fair representation. I invite all of you to visit our website, fair.work, where you can see platforms from your country and check what they are doing, or not doing, to meet basic standards of fair work.

Moderator:
Okay, thank you for that. I would like to thank all our speakers, both on site and online, for sharing their insights, their experiences, and the efforts they are working on, and I would open the floor for questions both on site and online. Online moderator, do we have any questions online? If you're on site and you have a question, you may go to one of the standing mics, state your name and the country you're from, and go ahead with your question. We have one question on site.

Audience:
Hi, my name is Daniele Turra. I am one of the youth ambassadors from the Internet Society, and I'm from Italy. As the panelists anticipated, I understand there are a lot of problems and needs: specific legal diversities that are not always respected, lack of accessibility, and the need to respect privacy. These are not always really respected, and all of that is because of economic patterns and interests worldwide. But some of them, for example privacy, I would argue are also global rights; we can discuss whether they are human rights as well. I would be very interested to see, let's say, a taxonomy of specific local needs that are not respected by specific technologies of the global north when it comes to culture, history, or political characteristics. I would also like to understand which of these needs are shared with the global north and which are not. And by "not" I mean not regarding people originally born in the global south who later came to live in the global north, but specifically populations that plan to thrive in their own country of origin. So the idea is to understand which needs are local and which are global.

Moderator:
Thank you. Okay, is that question for Jonas, or is it open to any of the speakers? Sir? Okay, so Jonas, do you want to take up this question?

Jonas Valente:
What I would like to say is that when we talk about national and cultural context, what the Fair Work project brings is that we have one very serious problem, already addressed here by other speakers, which is the biases and discrimination faced by users. But we also need to consider what is behind digital technology production. That's why we highlight this discrimination and the consideration of the local context. For instance, when Pedro brings up the discussion on national regulations, we also need to consider the national regulations on work, and how those regulations, the national and local context, and the diversity of populations and cultural expressions can be considered in their own characteristics on the internet as a whole, but especially on digital labor platforms and global platforms. That's why I believe the discussion Pedro brought, and the conversation we're having now, needs to look at those diverse contexts and groups, and at the same time think about how to incorporate them not only in digital and data practices, but in regulatory efforts as well.

Moderator:
Okay, thank you. I’m also going to invite Tevin to address the question in a very brief remark. Yes.

Tevin Gitongo:
So, you asked what is local and what is international. International, I'll say privacy. I think you alluded to it: it affects everyone, no matter if it's the global north or the global south. We see it in Kenya every day. We have a Data Commissioner's office, we actively work with them, and the same issues that are raised in European countries are the same issues raised in Kenya, even when it comes to AI products. It's: why am I getting these marketing messages, how did you take my data, issues of consent, where are you using it. Kenyans have become very interesting; they've been asking where you are transferring the data to. They're asking questions that you would find anywhere, and this is not just the urban Kenyan, it's even the rural Kenyan. You'll go and talk to them and they're like, okay, I saw this application, however they told me to do this, and I'm wondering why they told me to do this. So it's something that everyone is aware of. In relation to local, I'll say languages, because when you're developing, for example, natural language processing systems, sadly most of them are geared towards the global north. The English used, the pronunciation of the words, is very different from the language being used locally. It's time we start looking at the local aspects, especially languages, because that's the only way you start bridging the digital divide: not everyone will speak fluent English or fluent Swahili, and you need to develop products that cater to their needs.

Moderator:
Yes, and that brings me to a question for Pedro. Pedro, regarding the search for a balance of power relations between countries, is there a risk here, and how does this affect the Internet as a global network?

Pedro de Perdigão Lana:
Yeah, I would like to build upon what was just said by the previous speaker; I would use the same example, but inverted. I think language is an international issue, because even though we adapt to each country, it's the same issue that we have around the world. Privacy, on the other side, can have different interpretations: what's most important, what's not. And that's exactly what is especially dangerous when we are talking about platforms not diversifying what they are doing, and not doing so at an international level; they prefer some regions to others. In a period in which international relations are becoming increasingly tense and discourses against external threats are on the rise, it seems very easy not just to expose the true facts about how these relations work, such as how these platforms may be an instrument for expanding the influence of a certain country or even act directly on its behalf, as we learned with Snowden, but also to extrapolate this context to gain support for actions that present the international nature of the Internet as a problem in itself. So doing those small things, such as translating content correctly and adapting to national legislation, may be exactly what we need to avoid having a splinternet, to avoid having the Internet as we know it severely affected in an active way.

Moderator:
Thank you. Thank you for that. We have learned that we have some questions from the online participants, and I would like to call on Nelly, our online moderator, to read the questions out loud for the audience. Nelly, you may take the floor, please. Are you with us, Nelly? Okay, it seems Nelly is not with us. Any other question from the audience here, from the on-site participants? Nelly, we think you're muted. Please unmute your mic and take the floor. Technical team, can you please help us give the floor to Nelly? Unmute her mic. Okay, if there are no other questions, it seems we're rounding up the session. I'm actually very thrilled to invite our speakers to share their invaluable recommendations on the following question. Was that Nelly? Nelly, is that you? Okay, we're going to go ahead; if Nelly happens to unmute her mic, we'll take questions from her then. Until then, I'm going to ask our panelists, who have shared their insights and experiences, for their recommendations on the following question: what should decolonizing digital rights look like? But before I give you the floor, I would also like to strongly encourage the audience to seize this opportunity to share your recommendations by scanning the QR code that's going to be displayed on the screen shortly. And now I would like to welcome Ananya. Please go ahead: what should decolonizing the internet look like?

Ananya Singh:
I think this has just been done because I finished ahead of time. All right. Well, let's say this: my blood group is B positive. There you go, you have another one of my personal data points. Anyway, being the positive person that I apparently am, I believe that every cloud has a silver lining. So this cloud of data colonialism presents an opportunity for us: an opportunity to create ethical systems which run on the following principles. A, ownership by design, where users are provided with clear and understandable information about how their data will be collected, used, processed, shared, stored, or erased. It involves obtaining informed consent that is granular and specific, allowing individuals to make informed choices about their data. B, minimization and anonymization: only the necessary and most relevant data is collected and processed, and wherever possible, such data is kept anonymous and encrypted. This reduces the risks associated with data breaches and unauthorized access while respecting individuals' privacy. C, there should be an option to be forgotten or to easily revoke consent when desired. I know there are options to be forgotten, but revoking consent has been a complicated process so far. D, mechanisms for accountability and redress in case of data breaches or privacy violations are hard to find. This involves providing individuals with avenues to exercise their rights, report violations, and seek remedies for any harms, and it should and must go beyond blocking and reporting accounts. And E, I just want to finally note this: the whole entitled attitude that makes data colonialism possible must be done away with. Put simply, for example, I was born with a name. My name is a data point. Just because I provided my name to my school on the day of enrollment does not automatically translate into their unprecedented right to unchecked use of my name for the rest of their existence. Data use is not a right, but a permission. 
Data reuse is not an entitlement but, once again, a permission slip. Thank you.

Moderator:
Thank you. Thank you so much for that, Ananya. And I think we have access to Nelly now, so we're going to take the question from her online. Nelly, you may unmute your mic, please, and ask your question to our panelists.

Audience:
Thank you for letting me turn on my mic. The question that arose from your very interesting discussion is this: how can digital literacy and skills training be reimagined to empower marginalized communities and bridge the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital world?

Moderator:
Can you please repeat the question, Nelly?

Audience:
Yes, of course. How can digital literacy and skills training be reimagined to empower marginalized communities and bridge the digital divide, ensuring that everyone has the necessary tools to fully participate in the digital world?

Moderator:
Okay, Tevin is going to take the question.

Ananya Singh:
Just to help Tevin answer the question: I think it basically means how we could program and structure digital literacy programs which would, I assume, help people to better navigate a world that is more decolonized. So, how could digital literacy aid the process of decolonizing the Internet?

Tevin Gitongo:
Yes, for that, I'm a proponent of, and I keep reiterating this, putting yourself in the shoes of the person. I'm going to give a good example from a discussion we were having recently. In Kenya, we have a lot of women selling food and groceries in little tuck shops, where you go to buy your groceries. And we were thinking: how do you enable them, for example, to use digital tools to support the sale of their products? That's what we are trying to do with our projects. It has always been us telling them, come to us; but now it's, how do we go to them? How do you go to them at their level and work with their skill, because they really have a lot of skill, and just empower that? I think that's what the challenge and the discussion should be. It's also something we should be thinking about: how do you go there and work with them where they are? I can't say we have the complete answer to that; it's a learning process. But I'm a big proponent of finding people where they are. Don't make them come to you, because that's more of a burden. You look for them and work with them from where they are, because one of the things the study was showing, even when we were talking to them, was how much knowledge they have. For example, one of them was telling us: you know, you click on this, and I don't know what I'm clicking on, because it doesn't make sense when I read it. As Ananya just said, it's the terms and conditions, and it's like 30 pages; you just say I agree and move on. But they are cognizant of the fact that they're giving away their data. So perhaps it's about coming to them and breaking it down to a point that they also understand.

Moderator:
I like that you mentioned that, because there's a principle that I learned recently in digital development: you have to design with the user. At the end of the day, if you're looking to benefit them, they need to be actively involved in the process. You need to know what their challenges are, what their perspectives are, what they think is going to benefit them, and include that in the process. And that's fair too. I would like to continue with the recommendations we're getting from our panelists about what decolonizing digital rights should look like.

Audience:
Ananya has given hers, so we're going to move on to Jonas online, who will share his. By the way, please note that I'm actually sharing my screen and there's a QR code; it should be on the Padlet, it should be on the screen. Yes, that one. You can just scan it, or I'll send the link in the chat for the online participants to make their comments as well. So Jonas, please share your recommendations on what decolonizing digital rights should look like.

Jonas Valente:
Thank you so much. I would say that decolonizing digital technologies involves not only decolonizing the use of digital technologies, but also the production process. And that's why we need to incorporate the labor dimension into our decolonization agenda. This means ensuring not only basic standards of fair work, which is what we are assessing in our project, but a radically and structurally different work arrangement, where we don't have international, national, local and population-group asymmetries, and where workers are not exploited anymore. So I believe we need to incorporate this into our agenda, and to quote a Latin American philosopher, Enrique Dussel: it's not only about decolonizing, it's about liberating oppressed people and creating something radically new.

Moderator:
Thank you for that, Jonas.

Shalini Joshi:
Shalini. Yeah. I’m going to be very brief and say that in order to decolonize digital rights, it’s really important to look at who’s being included in the process of creating digital tools. We have to involve hyperlocal communities in creating data sets, something that I talked about earlier as well. We also have to make sure that there are people from marginalized communities who are involved in analyzing the data, annotating the data, in actually creating the technology, because it’s these people who understand the context, the language, and the issues much more than technologists and coders and developers sitting somewhere else. So involving the people in the creation of the technology, making processes more inclusive, ensuring that many, many languages are being included in the way that we analyze data, all of that is really important.

Moderator:
Thank you. Thank you for that. And Pedro, what are your recommendations?

Pedro de Perdigão Lana:
Yeah, I already talked a bit about this in my last comments, but just to be very brief: I think platforms should try to diversify a little bit more to adapt to local cultures and scenarios, and countries that are historically more influential in directing how the internet is modeled should actively try to share those powers, those capacities. It's not just about decolonizing the digital space, but about preserving the internet as we know it, as a global network and a force for good. So, that's it. Thanks for your attention, everyone.

Moderator:
All right, thank you. And we’ll take our last recommendation from our final panelist, Tevin.

Tevin Gitongo:
Perhaps my recommendation will be to ask ourselves four fundamental questions, some of which have already been alluded to. The first one is, who? Who's developing the systems? The second one is, why are they doing it? In most cases, it's for economic gain. If we're being honest, the baseline of this whole conversation is economic gain, and what they stand to benefit. As Anja said, data is the new sun; let's call it the politics of data. Everyone wants to be the ruler of data now. The third is, where are they being developed? Are they being developed where the marginalized people you're targeting are? Because someone sitting in Silicon Valley, to be quite honest, is not really thinking of me as a Kenyan using their AI product. I am the last person on their mind, because of where they are. The last question is, what? What is it for, at the end of the day? Yeah, thank you very much.

Moderator:
Yes, that's very true. I think all our panelists have shared very thought-provoking and insightful experiences and expertise on this topic. As we conclude this session today, I'd like to express my sincerest gratitude to our online and onsite panelists for their expertise and thought-provoking contributions. Your insights have been instrumental in deepening our understanding of the complexities that surround the decolonization of the Internet and technology. I'd also like to thank the audience, both onsite and online, for your engagement, for your questions, and for being here today. Your participation has greatly enriched our discussions. In closing, I would like us to remember that the journey towards a decolonized Internet and digital landscape is ongoing. It's not static; it's not something that's already established. It's ongoing, and it's a learning process. It requires continuous reflection, dialogue, and calls to action. Tevin talked about who's benefiting, what it's for, and economic gain. And I think that together, we can strive for a digital space that is inclusive and that respects and empowers all individuals and all communities, regardless of their background or geographical location. We have to work together to create a future where the internet truly becomes a force for equality, justice, and liberation. Thank you, and that is it for this session. Thank you all.

Audience:
So we have another session in, I think, 22 minutes, happening here at 1730 Japanese Standard Time. I'll take it from there. I hope you'll stay with us. If you want, you can quickly grab something to drink or eat in the meantime, and if you're going outside, I would request you to bring in your colleagues and friends to join us for the next session. Thank you very much for attending. Thank you.

Jonas Valente
Speech speed: 157 words per minute
Speech length: 1618 words
Speech time: 618 secs

Pedro de Perdigão Lana
Speech speed: 168 words per minute
Speech length: 1536 words
Speech time: 549 secs

Ananya Singh
Speech speed: 168 words per minute
Speech length: 1898 words
Speech time: 678 secs

Audience
Speech speed: 158 words per minute
Speech length: 519 words
Speech time: 197 secs

Moderator
Speech speed: 169 words per minute
Speech length: 3814 words
Speech time: 1356 secs

Shalini Joshi
Speech speed: 133 words per minute
Speech length: 1247 words
Speech time: 561 secs

Tevin Gitongo
Speech speed: 218 words per minute
Speech length: 2718 words
Speech time: 749 secs